Blog Archive

PHP, PDO & HP VERTICA

I recently got PHP’s PDO talking to HP Vertica and thought I’d share the setup for those out there needing some help. I’m working with an Ubuntu server box, but these instructions could be pretty easily adapted to other distros:

1. Ensure you have the Unix ODBC package installed. This is what PDO uses to connect to the Vertica instance:

  • sudo apt-get -y install unixodbc

2. Ensure that your PHP installation has PDO support with ODBC handling:

  • php -m | grep -i odbc

You should see something like “PDO_ODBC” there if it’s installed.

3. Grab the latest Vertica Linux drivers from the My.Vertica site (requires a login). Click on the Downloads section and scroll all the way down to the drivers. Download the right ones for your Linux installation. I used the Linux ODBC 64-bit package.

4. Make a “/opt/vertica” directory on your system and untar the archive there.

  • sudo mkdir /opt/vertica
  • sudo tar zxvf vertica-odbc-6.1.3-0.x86_64.linux.tar.gz -C /opt/vertica/ > /dev/null

LARAVEL ROUTE PROTECTION WITH INVOKE-1

So we have our basic yaml configuration file with protection turned on. Say we wanted to add in group and permission checks too. I’ve already talked some about this kind of handling in a different post but I’ve more recently simplified it even more, no longer requiring extra classes in the mix.

Let’s start by changing our configuration file to tell Invoke that we want to be sure the user is in the “admin” group and has a permission of “delete_user” to access the /admin/user/delete resource:

  • /admin/user/delete:
  •     protected: on
  •     groups: [admin]
  •     permissions: [delete_user]

When you fire off the page request for that URL, Invoke will try to call the InvokeUser::getGroups and InvokeUser::getPermissions methods to return the user’s current group and permission set. Previously it required you to use classes that implemented the InvokeGroup and InvokePermission interfaces for each group/permission. I streamlined this, since it’s really only evaluating string matches, and allowed those methods to return either a set of objects or a set of strings. Let’s update the InvokeUser class to hard-code some groups/permissions for it to return:

  • {
  •     /** …more code… */
  •     public function getGroups()
  •     {
  •         return ['admin', 'froods'];
  •     }
  •     public function getPermissions()
  •     {
  •         return ['delete_user', 'view_user', 'update_user'];
  •     }
  •     /** …more code… */
  • }

Ideally you’d be fetching these groups and permissions from some role-based access control system (say, Gatekeeper) and returning real values. These hard-coded values will work for now.

Since the user has all the requirements, Invoke is happy and they’re able to move along and delete all the users they want.

I’ve tried to keep the class as simple as possible to use, and I’m definitely open to suggestions. There are a few additions I’ve thought about, including HTTP method matching (different rules for POST than GET) and match types other than just groups and permissions.

PHP Virtual Machine – 5

VM macros

As can be seen from the previous code listing, the virtual machine implementation makes liberal use of macros. Some of these are normal C macros, while others are resolved during generation of the virtual machine. In particular, this includes a number of macros for fetching and freeing instruction operands:

  • OPn_TYPE
  • OP_DATA_TYPE
  • GET_OPn_ZVAL_PTR(BP_VAR_*)
  • GET_OPn_ZVAL_PTR_DEREF(BP_VAR_*)
  • GET_OPn_ZVAL_PTR_UNDEF(BP_VAR_*)
  • GET_OPn_ZVAL_PTR_PTR(BP_VAR_*)
  • GET_OPn_ZVAL_PTR_PTR_UNDEF(BP_VAR_*)
  • GET_OPn_OBJ_ZVAL_PTR(BP_VAR_*)
  • GET_OPn_OBJ_ZVAL_PTR_UNDEF(BP_VAR_*)
  • GET_OPn_OBJ_ZVAL_PTR_DEREF(BP_VAR_*)
  • GET_OPn_OBJ_ZVAL_PTR_PTR(BP_VAR_*)
  • GET_OPn_OBJ_ZVAL_PTR_PTR_UNDEF(BP_VAR_*)
  • GET_OP_DATA_ZVAL_PTR()
  • GET_OP_DATA_ZVAL_PTR_DEREF()
  • FREE_OPn()
  • FREE_OPn_IF_VAR()
  • FREE_UNFETCHED_OPn()
  • FREE_OP_DATA()
  • FREE_UNFETCHED_OP_DATA()

As you can see, there are quite a few variations here. The BP_VAR_* arguments specify the fetch mode and support the same modes as the FETCH_* instructions (with the exception of FUNC_ARG).

The FREE_UNFETCHED_OP*() variants are used in cases where an operand must be freed before it has been fetched with GET. This typically occurs if an exception is thrown prior to operand fetching.

Apart from these specialized macros, there are also quite a few macros of the more ordinary sort. The VM defines a number of macros which control what happens after an opcode handler has run:

  • ZEND_VM_CONTINUE()
  • ZEND_VM_ENTER()
  • ZEND_VM_LEAVE()
  • ZEND_VM_RETURN()

The differences between them come down to whether a macro includes an implicit ZEND_VM_CONTINUE(), whether it checks for exceptions, and whether it checks for VM interrupts.

Next to these, there are also SAVE_OPLINE(), LOAD_OPLINE() and HANDLE_EXCEPTION(). As has been mentioned in the section on exception handling, SAVE_OPLINE() is used before the first potentially throwing operation in an opcode handler. If necessary, it writes back the opline used by the VM (which might be in a global register) into the execute data.

PHP Virtual Machine – 4

Smart branches

It is very common for comparison instructions to be directly followed by conditional jumps. For example:

L0: T2 = IS_EQUAL $a, $b
L1: JMPZ T2 ->L3
L2: ECHO "equal"

Because this pattern is so common, all the comparison opcodes (such as IS_EQUAL) implement a smart branch mechanism: they check if the next instruction is a JMPZ or JMPNZ instruction and if so, perform the respective jump operation themselves.

The smart branch mechanism only checks whether the next instruction is a JMPZ/JMPNZ; it does not check whether the operand of that jump is actually the result of the comparison or something else. This requires special care in cases where the comparison and the subsequent jump are unrelated. For example, the code ($a == $b) + ($d ? $e : $f) generates the following, where a NOP is inserted between the comparison and the unrelated JMPZ so the smart branch mechanism is not triggered:

L0: T5 = IS_EQUAL $a, $b
L1: NOP
L2: JMPZ $d ->L5
L3: T6 = QM_ASSIGN $e
L4: JMP ->L6
L5: T6 = QM_ASSIGN $f
L6: T7 = ADD T5 T6
L7: FREE T7

Runtime cache

Because opcode arrays are shared (without locks) between multiple processes, they are strictly immutable. However, runtime values may be cached in a separate “runtime cache”, which is basically an array of pointers. Literals may have an associated runtime cache entry (or more than one), which is stored in their u2 slot.

Runtime cache entries come in two types: The first are ordinary cache entries, such as the one used by INIT_FCALL. After INIT_FCALL has looked up the called function once (based on its name), the function pointer will be cached in the associated runtime cache slot.

The second type are polymorphic cache entries, which are just two consecutive cache slots, where the first stores a class entry and the second the actual datum. These are used for operations like FETCH_OBJ_R, where the offset of the property in the property table for a certain class is cached.

If the next access happens on the same class (which is quite likely), the cached value will be used. Otherwise a more expensive lookup operation is performed, and the result is cached for the new class entry.
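As a loose analogy (written in C# rather than the VM’s actual C code, purely to illustrate the idea), a polymorphic cache entry boils down to two slots that are checked before falling back to the slow lookup:

  • // Rough sketch of a polymorphic cache slot: not PHP's implementation, just the concept.
  • sealed class PropertyOffsetCache
  • {
  •     private Type cachedClass;   // first slot: the class entry seen on the last access
  •     private int cachedOffset;   // second slot: the datum cached for that class
  •     public int GetOffset(Type klass, Func<Type, int> slowLookup)
  •     {
  •         if (klass == cachedClass)
  •         {
  •             // same class as last time: reuse the cached value
  •             return cachedOffset;
  •         }
  •         // different class: do the expensive lookup and cache the result for it
  •         cachedClass = klass;
  •         cachedOffset = slowLookup(klass);
  •         return cachedOffset;
  •     }
  • }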

jQuery Image Upload & Refresh Using an ASHX File – 3

Scaling an Image

An image might be uploaded that’s 3000×4000 or some other ugly size. No one wants to get sent an image that size on the web for previewing… no one. So I decided to trim it down.

  • public double GetScale(byte[] image, double width, double height)
  • {
  •     try
  •     {
  •         double scale = 1.0;
  •         System.IO.MemoryStream ms = new MemoryStream(image);
  •         System.Drawing.Image img = System.Drawing.Image.FromStream(ms);
  •         double sX, sY;
  •         sX = width / img.Width;
  •         sY = height / img.Height;
  •         ms.Close();
  •         ms.Dispose();
  •         ms = null;
  •         img.Dispose();
  •         img = null;
  •         // use the smaller of the two ratios so the image fits within the target size
  •         scale = Math.Min(sX, sY);
  •         return scale;
  •     }
  •     catch (Exception ex)
  •     {
  •         throw new Exception("Error getting scale", ex);
  •     }
  • }

I’m passing in the byte[] and the width/height that I want. After I get the scale needed, I scale the image.

  • public Image ScaleByPercent(Image image, float percent)
  • {
  •     try
  •     {
  •         Bitmap result = null;
  •         if (image != null)
  •         {
  •             int destWidth = (int)((float)image.Width * percent);
  •             int destHeight = (int)((float)image.Height * percent);
  •             Rectangle srcRec = new Rectangle(0, 0, image.Width, image.Height);
  •             Rectangle destRec = new Rectangle(0, 0, destWidth, destHeight);
  •             result = new Bitmap(destWidth, destHeight);
  •             result.SetResolution(image.HorizontalResolution, image.VerticalResolution);
  •             using (Graphics g = Graphics.FromImage(result))
  •             {
  •                 g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
  •                 g.DrawImage(image, destRec, srcRec, GraphicsUnit.Pixel);
  •             }
  •         }
  •         return result;
  •     }
  •     catch (Exception ex)
  •     {
  •         throw new Exception("Error scaling image", ex);
  •     }
  • }
  • public byte[] ScaleByPercent(byte[] image, float percent)
  • {
  •     try
  •     {
  •         System.IO.MemoryStream ms = new MemoryStream(image);
  •         System.Drawing.Image img = System.Drawing.Image.FromStream(ms);
  •         Image i = ScaleByPercent(img, percent);
  •         MemoryStream m = new MemoryStream();
  •         i.Save(m, System.Drawing.Imaging.ImageFormat.Png);
  •         image = m.ToArray();
  •         m.Close();
  •         ms.Close();
  •         ms.Dispose();
  •         m.Dispose();
  •         m = null;
  •         ms = null;
  •         img = null;
  •         i = null;
  •         return image;
  •     }
  •     catch (Exception ex)
  •     {
  •         throw new Exception("Error scaling image", ex);
  •     }
  • }
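Putting the two together, a call site might look like the sketch below. This is my own illustration: the uploadedBytes variable is a placeholder for the raw bytes from the request, and the 450×50 target is just the signature size used in part 2.

  • // hypothetical call site: 'uploadedBytes' holds the raw image bytes from the upload
  • Utility utility = new Utility();
  • double scale = utility.GetScale(uploadedBytes, 450, 50);
  • byte[] preview = utility.ScaleByPercent(uploadedBytes, (float)scale);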

jQuery Image Upload & Refresh Using an ASHX File – 2

For the second part I’ll show you an easy snippet of code that can be used to upload and/or view the image. The example below deals with signatures, though it can be easily and quickly modified for any type of image.

  • string fileName = System.IO.Path.GetFileName(context.Request.Files[0].FileName);
  • string extension = System.IO.Path.GetExtension(context.Request.Files[0].FileName).ToLower();
  • HttpPostedFile file = context.Request.Files[0];
  • if (file.ContentLength == 0)
  • {
  •     // no file posted
  •     rMessage = "There was no data found in the file. Please ensure the file being uploaded is a valid image file.";
  •     break;
  • }
  • byte[] bImage = new Byte[file.ContentLength];
  • file.InputStream.Read(bImage, 0, file.ContentLength);
  • context.Session["Signature"] = bImage;
  • if (info.Successful)
  • {
  •     rMessage = "Signature upload successful.";
  • }
  • else
  • {
  •     rMessage = "There was an error uploading your signature.";
  • }

All files that are uploaded are located in context.Request.Files. Since I’m only uploading one image, I know it’s going to be at the 0 index in the file collection. The next snippet sends the stored signature back to the browser: it looks up the PNG content type, scales the image down, and writes the bytes to the response.

  • string contentType = Microsoft.Win32.Registry.GetValue("HKEY_CLASSES_ROOT\\.PNG",
  •     "Content Type", "application/octet-stream").ToString();
  • utility = new Utility();
  • double scale = utility.GetScale(signature, 450, 50);
  • signature = utility.ScaleByPercent(signature, (float)scale);
  • context.Response.AddHeader("Content-Disposition", "attachment; filename=UserSignature.png");
  • context.Response.AddHeader("Content-Length", signature.Length.ToString());
  • context.Response.AddHeader("Cache-Control", "no-cache, must-revalidate");
  • context.Response.Expires = -1;
  • context.Response.ContentType = contentType;
  • context.Response.BufferOutput = false;
  • context.Response.OutputStream.Write(signature, 0, signature.Length);

Pruning the EventLog with EventLogConfig

In DNN you have the ability to configure whether specific EventLogTypes are tracked or not. By default in DNN there are over 100 different events that can be tracked in the EventLog table. Many of these are turned off by default; you can turn them on by going to the Admin Logs page in the Persona Bar and choosing the Log Settings tab.

In doing so you will be presented with the Log Settings page.

From here you can click on the Edit pencil on each row and enable or disable the Logging setting.

You can also turn on options such as email notifications and the Keep Most Recent entries option.

Some of the default options in DNN will have the Keep Most Recent option configured to a low number, like “10 Entries”, but some will have it set to All. This can cause the EventLog table to fill up with a huge number of events, depending on how much traffic your website gets. You can go through and set these all manually through the admin UI, or you can do it in bulk in the database with this simple SQL statement:

  • update Eventlogconfig
  • set keepmostrecent = 10

If you’re using the SQL Console page, you can use this statement:

  • update {databaseOwner}{objectQualifier}Eventlogconfig
  • set keepmostrecent = 10

Return multiple values from methods

In this article, I am going to explain how tuples can be used in C# 7 onwards to return multiple values.

Consider the following code from a console application. The method GetDivisionResults accepts two parameters, number and divisor, and returns two integers: the quotient and the remainder.

  • static void Main(string[] args)
  • {
  •     int number = 17;
  •     int divisor = 5;
  •     var result = GetDivisionResults(number, divisor);
  •     Console.WriteLine("Quotient is " + result.Item1);
  •     Console.WriteLine("Remainder is " + result.Item2);
  • }
  • static (int, int) GetDivisionResults(int number, int divisor)
  • {
  •     int quotient = number / divisor;
  •     int remainder = number % divisor;
  •     return (quotient, remainder);
  • }

The following method signature defines a method that returns two integer values as a tuple.

  • (int, int) GetDivisionResults(int number, int divisor)

(int, int) – defines the return type of the method, which is a tuple containing two integers. I am not concerned about the logic of the application here; just see how it returns two numbers.

  • return (quotient, remainder);

Cool, right? Let us see how we can access the return values.

  • Console.WriteLine("Quotient is " + result.Item1);
  • Console.WriteLine("Remainder is " + result.Item2);

As you can see, the returned values are accessed by their relative position in the tuple. But using .Item1, .Item2, etc. on the tuple variable is not friendly. Luckily, C# 7 gives you the option to name the values coming back from a tuple. Consider the following example.

  • static void Main(string[] args)
  • {
  •     int number = 17;
  •     int divisor = 5;
  •     (int quotient, int remainder) = GetDivisionResults(number, divisor);
  •     Console.WriteLine("Quotient is " + quotient);
  •     Console.WriteLine("Remainder is " + remainder);
  • }
  • static (int, int) GetDivisionResults(int number, int divisor)
  • {
  •     int quotient = number / divisor;
  •     int remainder = number % divisor;
  •     return (quotient, remainder);
  • }
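Another option, shown here as a small sketch of my own rather than part of the original example, is to name the tuple elements directly in the method’s return type so callers get meaningful member names instead of Item1/Item2 (GetDivisionResultsNamed is just an illustrative name):

  • static (int Quotient, int Remainder) GetDivisionResultsNamed(int number, int divisor)
  • {
  •     // same logic as above, but the tuple elements now carry names
  •     return (number / divisor, number % divisor);
  • }
  • // usage: the element names flow through to the call site
  • var result = GetDivisionResultsNamed(17, 5);
  • Console.WriteLine("Quotient is " + result.Quotient);
  • Console.WriteLine("Remainder is " + result.Remainder);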

Entity Framework Core – Part 5

This will be the fifth post in a series of posts about bringing the features that were present in Entity Framework pre-Core into EF Core. The others are:

Part 1: Introduction, Find, Getting an Entity’s Id Programmatically, Reload, Local, Evict
Part 2: Explicit Loading
Part 3: Validations
Part 4: Conventions

This time I’m going to talk about something that is often requested: how can I get the SQL string for a LINQ query? If you remember, in the pre-Core days you had to do some reflection in order to get the underlying ObjectQuery and then call its ToTraceString method. Now, things are very different although, I must say, still rather tricky!

  • private static readonly TypeInfo QueryCompilerTypeInfo = typeof(QueryCompiler).GetTypeInfo();
  • private static readonly FieldInfo QueryCompilerField = typeof(EntityQueryProvider).GetTypeInfo().DeclaredFields.First(x => x.Name == "_queryCompiler");
  • private static readonly PropertyInfo NodeTypeProviderField = QueryCompilerTypeInfo.DeclaredProperties.Single(x => x.Name == "NodeTypeProvider");
  • private static readonly MethodInfo CreateQueryParserMethod = QueryCompilerTypeInfo.DeclaredMethods.First(x => x.Name == "CreateQueryParser");
  • private static readonly FieldInfo DataBaseField = QueryCompilerTypeInfo.DeclaredFields.Single(x => x.Name == "_database");
  • private static readonly FieldInfo QueryCompilationContextFactoryField = typeof(Database).GetTypeInfo().DeclaredFields.Single(x => x.Name == "_queryCompilationContextFactory");
  • public static string ToSql<TEntity>(this IQueryable<TEntity> query) where TEntity : class
  • {
  •     if (!(query is EntityQueryable<TEntity>) && !(query is InternalDbSet<TEntity>))
  •     {
  •         throw new ArgumentException("Invalid query");
  •     }
  •     var queryCompiler = (IQueryCompiler) QueryCompilerField.GetValue(query.Provider);
  •     var nodeTypeProvider = (INodeTypeProvider) NodeTypeProviderField.GetValue(queryCompiler);
  •     var parser = (IQueryParser) CreateQueryParserMethod.Invoke(queryCompiler, new object[] { nodeTypeProvider });
  •     var queryModel = parser.GetParsedQuery(query.Expression);
  •     var database = DataBaseField.GetValue(queryCompiler);
  •     var queryCompilationContextFactory = (IQueryCompilationContextFactory) QueryCompilationContextFactoryField.GetValue(database);
  •     var queryCompilationContext = queryCompilationContextFactory.Create(false);
  •     var modelVisitor = (RelationalQueryModelVisitor) queryCompilationContext.CreateQueryModelVisitor();
  •     modelVisitor.CreateQueryExecutor<TEntity>(queryModel);
  •     var sql = modelVisitor.Queries.First().ToString();
  •     return sql;
  • }

You can see that it needs some reflection, meaning things *may* break in a future version. I cached all of the fields to make access faster in subsequent calls. For the time being, it works perfectly:

  • var sql1 = ctx.Blogs.ToSql();
  • var sql2 = ctx
  • .Blogs
  • .Where(b => b.CreationDate.Year == 2017)
  • .ToSql();

Entity Framework Core – Part 4

Conventions

This will be the fourth in a series of posts about bringing the features that were present in Entity Framework pre-Core into EF Core. The others are:

Part 1: Introduction, Find, Getting an Entity’s Id Programmatically, Reload, Local, Evict
Part 2: Explicit Loading
Part 3: Validations

Conventions are a mechanism by which we do not have to configure specific aspects of our mappings over and over again. We just accept how they will be configured; of course, we can override them if we need to, but in most cases we’re safe.

In EF 6.x we had a number of built-in conventions (which we could remove) but we also had the ability to add our own. In Entity Framework Core 1, this capability hasn’t been implemented, or, rather, it is there, but hidden under the surface.

  • public static class ModelBuilderExtensions
  • {
  •     public static ModelBuilder AddConvention(this ModelBuilder modelBuilder, IModelConvention convention)
  •     {
  •         var imb = modelBuilder.GetInfrastructure();
  •         var cd = imb.Metadata.ConventionDispatcher;
  •         var cs = cd.GetType().GetField("_conventionSet", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(cd) as ConventionSet;
  •         cs.ModelBuiltConventions.Add(convention);
  •         return modelBuilder;
  •     }
  •     public static ModelBuilder AddConvention<TConvention>(this ModelBuilder modelBuilder) where TConvention : IModelConvention, new()
  •     {
  •         return modelBuilder.AddConvention(new TConvention());
  •     }
  • }

With these extension methods in place, we can write a custom convention. Here’s one that applies a default maximum length to every string property that doesn’t already have one:

  • public sealed class DefaultStringLengthConvention : IModelConvention
  • {
  •     internal const int DefaultStringLength = 50;
  •     internal const string MaxLengthAnnotation = "MaxLength";
  •     private readonly int _defaultStringLength;
  •     public DefaultStringLengthConvention(int defaultStringLength = DefaultStringLength)
  •     {
  •         this._defaultStringLength = defaultStringLength;
  •     }
  •     public InternalModelBuilder Apply(InternalModelBuilder modelBuilder)
  •     {
  •         foreach (var entity in modelBuilder.Metadata.GetEntityTypes())
  •         {
  •             foreach (var property in entity.GetProperties())
  •             {
  •                 if (property.ClrType == typeof(string))
  •                 {
  •                     if (property.FindAnnotation(MaxLengthAnnotation) == null)
  •                     {
  •                         property.AddAnnotation(MaxLengthAnnotation, this._defaultStringLength);
  •                     }
  •                 }
  •             }
  •         }
  •         return modelBuilder;
  •     }
  • }
You can now reuse these conventions in all of your context classes very easily. Hope you enjoy it! Two final remarks about this solution:

  • The reflection bit is a problem, because things may change in the future. Hopefully Microsoft will give us a workaround;
  • There is no easy way to remove built-in (or provider-injected) conventions, because they are applied before OnModelCreating, but I think this is not a big problem, since we can change them afterwards, as we’ve seen.