Channel: SQL Editor – MySQL Workbench 6.3

MySQL Workbench Plugin: Execute Query to Text Output


In MySQL Workbench 5.2.26 a new query execution command is available, where query output is sent as text to the text Output tab of the SQL Editor. Some MySQL Workbench users liked the “Results to Text” option available in Microsoft SQL Server Management Studio. The cool thing is that, with a few lines of Python, we implemented this command using the SQL Editor scripting API.

For full documentation on scripting and plugin development, refer to the documentation pointers page.

In this post, you will learn:

  • The Python script that implements “Results to Text”
  • How to customize the script to deliver your own results format command.

Execute Query to Text (accessible from Query -> Execute (All or Selection) to Text) will execute the query you typed and print its output as text in the Output tab of the SQL Editor. The output is similar to that of the MySQL command line client and can be copied and pasted as plain text. The command line client also has a different, interesting output format, activated through the --vertical command line option. It changes the output from a tabular to a form-like format, where row values are displayed as column name/value pairs:
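
For example, running SELECT * FROM sakila.actor LIMIT 1 with the --vertical option (or terminating the query with \G) prints something along these lines (the values shown are illustrative):

*************************** 1. row ***************************
   actor_id: 1
 first_name: PENELOPE
  last_name: GUINESS
last_update: 2006-02-15 04:34:33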

We will try emulating that format using our modified plugin.

The Original Plugin Code

The goals for the original plugin shipped with Workbench were:

  • Provide an alternative to the Results Grid output
  • Provide MySQL CLI and MS SQL Server Studio Text Formatted results
  • Add “Execute to Text” to the Query Menu

You can locate the original code for the plugin we want to modify in the sqlide_grt.py file, in the MySQL Workbench distribution (in Windows it will be in the modules directory in the WB folder, in MacOS X it will be in MySQLWorkbench.app/Contents/PlugIns and in Linux, in /usr/lib/mysql-workbench/modules).

# import the wb module
from wb import *
# import the grt module
import grt
# import the mforms module for GUI stuff
import mforms

# define this Python module as a GRT module
ModuleInfo = DefineModule(name= "SQLIDEUtils", author= "Oracle Corp.", version="1.0")

@ModuleInfo.plugin("wb.sqlide.executeToTextOutput", caption= "Execute Query Into Text Output", input= [wbinputs.currentQueryBuffer()], pluginMenu= "SQL/Utilities")
@ModuleInfo.export(grt.INT, grt.classes.db_query_QueryBuffer)
def executeQueryAsText(qbuffer):
  editor= qbuffer.owner
  sql= qbuffer.selectedText or qbuffer.script
  resultsets= editor.executeScript(sql)
  editor.addToOutput("Query Output:\n", 1)
  for result in resultsets:
    editor.addToOutput("> %s\n\n" % result.sql, 0)
    line= []
    column_lengths=[]
    ncolumns= len(result.columns)
    for column in result.columns:
      line.append(column.name + " "*5)
      column_lengths.append(len(column.name)+5)

    separator = []
    for c in column_lengths:
        separator.append("-"*c)
    separator= " + ".join(separator)
    editor.addToOutput("+ "+separator+" +\n", 0)

    line= " | ".join(line)
    editor.addToOutput("| "+line+" |\n", 0)

    editor.addToOutput("+ "+separator+" +\n", 0)

    rows = []
    ok= result.goToFirstRow()
    while ok:
      line= []
      for i in range(ncolumns):
        value = result.stringFieldValue(i)
        if value is None:
          value = "NULL"
        line.append(value.ljust(column_lengths[i]))
      line= " | ".join(line)
      rows.append("| "+line+" |\n")
      ok= result.nextRow()
    # much faster to do it at once than add lines one by one
    editor.addToOutput("".join(rows), 0)

    editor.addToOutput("+ "+separator+" +\n", 0)
    editor.addToOutput("%i rows\n" % len(rows), 0)

  return 0

Lines 1 to 6 import some Workbench specific Python modules:

  • wb, which contains various utility functions for creating plugins;
  • grt, for working with Workbench objects and interfacing with the application; and
  • mforms, for creating GUIs.
@ModuleInfo.plugin("wb.sqlide.executeToTextOutput", caption= "Execute Query Into Text Output", input= [wbinputs.currentQueryBuffer()], pluginMenu= "SQL/Utilities")
@ModuleInfo.export(grt.INT, grt.classes.db_query_QueryBuffer)
def executeQueryAsText(qbuffer):

@ModuleInfo.export(grt.INT, grt.classes.db_query_QueryBuffer) declares the return type (grt.INT by convention) and argument types of the plugin function defined further down. In the line above it, a unique identifier for the plugin is given, followed by a default caption to use in places such as menus, the input values taken by the plugin and the location in the Plugins menu where it should be placed.

The plugin executes the current query, so the argument it requests is wbinputs.currentQueryBuffer() (the selected query buffer tab), which has a type of db_query_QueryBuffer. You can read more about the available types and inputs in the relevant documentation.

The code itself is straightforward:

  1. it takes the query code,
  2. executes it through the SQL Editor object that owns the query buffer and
  3. renders the output in the text Output tab.

Custom Plugin

The goals for the custom plugin are:

  • Provide a custom alternative to the Results Grid output
  • Provide text results formatted as column name/value pairs
  • Add “Execute to Vertical Formatted Text” to the Query Menu

To create the modified version, we can copy the above plugin and make some changes.

  1. copy the plugin file from the Workbench plugins directory to some folder of yours (e.g. your home directory or Desktop);
  2. rename it to verticalquery_grt.py;
  3. open it in some text editor of your liking.

First, we change the module info:

ModuleInfo = DefineModule(name= "QueryToVerticalFormat", author= "WB Blog", version="1.0")

The plugin arguments are the same, so we only need to update its identifier and name:

@ModuleInfo.plugin("wbblog.executeToTextOutputVertical", caption= "Execute Query Into Text Output (vertical)", input= [wbinputs.currentQueryBuffer()], pluginMenu= "SQL/Utilities")
@ModuleInfo.export(grt.INT, grt.classes.db_query_QueryBuffer)
def executeQueryAsTextVertical(qbuffer):

You can see the body of the function in the complete sample module file here.
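
If you don't have the sample file at hand, a minimal sketch of such a function could look like the following. It only reuses API calls already seen in the original plugin above; the shipped sample module may differ in details:

def executeQueryAsTextVertical(qbuffer):
  editor = qbuffer.owner
  sql = qbuffer.selectedText or qbuffer.script
  resultsets = editor.executeScript(sql)
  editor.addToOutput("Query Output:\n", 1)
  for result in resultsets:
    editor.addToOutput("> %s\n\n" % result.sql, 0)
    # right-align the column names, like the CLI --vertical output does
    name_width = max([len(column.name) for column in result.columns] or [0])
    lines = []
    row_number = 0
    ok = result.goToFirstRow()
    while ok:
      row_number += 1
      lines.append("*************** %i. row ***************\n" % row_number)
      for i in range(len(result.columns)):
        value = result.stringFieldValue(i)
        if value is None:
          value = "NULL"
        lines.append("%s: %s\n" % (result.columns[i].name.rjust(name_width), value))
      ok = result.nextRow()
    # adding all lines at once is much faster than adding them one by one
    editor.addToOutput("".join(lines), 0)
    editor.addToOutput("%i rows\n" % row_number, 0)
  return 0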

Trying it Out

To install the module, you can use the Scripting -> Install Module/Script File… menu command. Select the newly created plugin file (verticalquery_grt.py) from the file browser and click Open.
Once installed, restart Workbench and run it:

You can download the entire, modified sample plugin here.


MySQL Workbench: PHP development helper plugins


In the new MySQL Workbench 5.2.35, a plugin has been added that will be of interest to PHP developers, both experienced and newbies.
The plugin contains a couple of functions that allow you to create PHP code straight out of your current work in the Workbench SQL Editor, ready to be pasted into your PHP program.

Screenshot of SQL Editor with PHP Plugins

Copy as PHP Code (Connect to Server)

This first plugin will take the parameters from your currently open connection to MySQL and create PHP code to connect to it.

$host="p:localhost";
$port=3306;
$socket="/var/mysql/mysql.sock";
$user="root";
$password="";
$dbname="";

$con = new mysqli($host, $user, $password, $dbname, $port, $socket)
	or die ('Could not connect to the database server' . mysqli_connect_error());

//$con->close();

Not a big deal, but saves some typing for getting something going quickly.

Copy as PHP Code (Iterate SELECT Results)

This one will get your query and generate code to execute it and then iterate through the results. It will also parse the SQL and substitute any SQL @variables you use in it with PHP variables that will be bound to the statement before execution. Resultset rows will be bound to PHP variables with the same name as the field (or alias, if your query specifies one).

So for the following query:

set @before_date = '1990-01-01';
set @after_date = '1980-01-01';

SELECT
    emp_no, first_name, last_name, hire_date
FROM
    `employees`.`employees`
WHERE
    `hire_date` < @before_date AND `hire_date` > @after_date;

you would get this back:

$query = "SELECT      emp_no, first_name, last_name, hire_date FROM     `employees`.`employees` WHERE     `hire_date` < ? AND `hire_date` > ?";
$before_date = '';
$after_date = '';

if ($stmt = $con->prepare($query)) {
    $stmt->bind_param('ss', $before_date, $after_date); //FIXME: param types: s- string, i- integer, d- double, b- blob
    $stmt->execute();
    $stmt->bind_result($emp_no, $first_name, $last_name, $hire_date);
    while ($stmt->fetch()) {
        //printf("%s, %s, %s, %s\n", $emp_no, $first_name, $last_name, $hire_date);
    }
    $stmt->close();
}

This should be enough to let you quickly create a PHP program that does something with the results of a parameterized query, straight out of your normal SQL development workflow and, as a bonus, safe from injection bugs.

Adding your own plugins

The plugins are simple, but more along these lines will be added in the future. And, more importantly, you can modify them to support your own needs. Here’s how:

First of all, find the plugin file. The filename is code_utils_grt.py and you should be able to find it by searching in the WB installation folder. To have your own version, rename it to something else like my_code_utils_grt.py, change a few identifiers so it won’t collide with the original built-in plugin (as described below) and use Scripting -> Install Plugin/Module… to install it to the correct place.

You can use the plugins there as a starting point for your own, or modify them to match your coding style, choice of PHP driver, etc.

The important things you need to change in the plugin copy before installing are:

  1. the plugin name, from CodeUtils to something else:

     ModuleInfo = DefineModule(name= "CodeUtils", author= "Oracle Corp.", version="1.0")

  2. the individual plugin names and identifiers (or just comment them out) and maybe the function names:

     @ModuleInfo.plugin("wb.sqlide.copyAsPHPConnect", caption= "Copy as PHP Code (Connect to Server)", input= [wbinputs.currentSQLEditor()], pluginMenu= "SQL/Utilities")
     @ModuleInfo.export(grt.INT, grt.classes.db_query_Editor)
     def copyAsPHPConnect(editor):

     The first parameter in @ModuleInfo.plugin is the plugin name and the second is the caption. You can leave everything else, especially the metadata about the input parameters, as it is.

Here’s a sample plugin that you can use as a template. It’s stripped to the basics for easier understanding:

@ModuleInfo.plugin("wb.sqlide.copyAsPHPQuery", caption= "Copy as PHP Code (Run Query)", input= [wbinputs.currentQueryBuffer()], pluginMenu= "SQL/Utilities")
@ModuleInfo.export(grt.INT, grt.classes.db_query_QueryBuffer)
def copyAsPHPQuery(qbuffer):
    sql= qbuffer.selectedText or qbuffer.script

    text = 'print "the query is %s\n"' % sql.replace('"', r'\"')

    mforms.Utilities.set_clipboard_text(text)

    mforms.App.get().set_status_text("Copied PHP code to clipboard")
    return 0
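
Keep in mind that the template above is intentionally stripped down: to work as an installable module it still needs the usual preamble from the plugins shown earlier (the imports plus a DefineModule call with your own renamed identifiers). A minimal, assumed example of that preamble:

# module preamble (assumed; it mirrors the structure of the shipped plugin files)
from wb import *      # brings in DefineModule, wbinputs, ...
import grt
import mforms

# pick your own module name here so it doesn't collide with the built-in CodeUtils
ModuleInfo = DefineModule(name= "MyCodeUtils", author= "you", version="1.0")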

MySQL Workbench 5.2.36: What’s New


MySQL Workbench 5.2.36 is now out and brings a lot of improvements across the board, with special focus on the Query Editor. We’ll cover some of that here:

Redesigned Query Editor

  • The log of executed commands and server responses is now always visible, while resultset grids and the query editor can be resized according to your needs. Resultsets are also grouped in the same tab as the query editor that generated them.
  • SELECT queries are now analyzed as in the old MySQL Query Browser tool and, if possible, their resultsets can be edited in the grid. If a resultset cannot be edited, you can place the mouse over the ReadOnly label to view the reason.

  • Improved snippets manager and editor: the snippets list is always at hand, while editing can be done without disrupting work in the main query area.
  • Editor state is now properly saved between sessions: sidebar sizes, the last selected schema and other state information are remembered.
  • Script and resultset tabs can be reordered.
  • Keyboard navigation of resultsets has been fixed to properly handle Tab key navigation in all platforms.
  • The schema tree will now work with multiple selections, allowing the same action to be performed with more than one object at a time. You can select the columns to appear in a SELECT statement.
  • A filter box was added to the live schema tree, allowing you to restrict the number of visible items in the tree to what you’re interested in.
  • The schema tree was expanded to show more information about objects. In addition to schemas, tables, columns, views and routines it also displays information about triggers, indexes and foreign keys. The object information box has been improved.
  • Export recordsets to Excel, JSON and XML files that match the format used by MySQL. The export code has also been revamped to make writing custom export formats easier, which will be explained in a future post.

Improved Administrator

The Administrator was also improved in the following areas:

  • Server Start and Shutdown page was updated to include server error log output
  • The Log browser support was improved to work with log files, in addition to log tables, when managing local and remote servers.
  • Export/Import layout was cleaned up and is now roomier and less cluttered.

Modeling

The focus of this release was on the SQL Editor, so there isn’t much new here. But heavy modeling users on Linux and Mac should be happy to know that the catalog tree has finally been fixed to stop scrolling back to the top.

 

We’re still working hard on adding and finishing more improvements to the major features we already have in MySQL Workbench, to make it the best and easiest to use database tool; but we think this is a good step forward.

MySQL Workbench 6.0: What’s New


With the first beta of MySQL Workbench 6.0 just released, we’ll go through the list of improvements we’ve made since 5.2.47.

New Home Screen

The Home screen went through a renovation and now has a modernized look. As part of the SQL Editor and Administration GUI unification, there’s now a single list for MySQL connections. Recently opened model files and other major features are also accessible from it.

You can organize different connections into “folders” by right clicking on a connection and selecting “Move to Group…” in the context menu.

New server connections can be added by clicking the + button next to the MySQL Connections heading. By clicking the Configure Remote Management… button in the new connection setup dialog, you can add server management capabilities to the connection. As before, SSH access with “sudo” is needed for remote management.

The wrench icon next to the heading brings up the connection editor, which lets you change connection and management parameters from an editor interface. Configuration was simplified compared to 5.2.

SQL Connections

In MySQL Workbench 6.0, the SQL Editor and Administrator interfaces were merged together. You can now access administration functionality, such as restarting the server or listing connections from the same database connection tab. The primary sidebar now has both the familiar Schema tree and the administration items.

If you’d like more space for the Schema tree, you can click the expand button next to the SCHEMAS heading and give it more vertical space.

Schema Inspector

The Schema Inspector allows you to browse general information about schema objects in the server. For tables, there is also a Table Maintenance panel, from which you can perform maintenance tasks such as ANALYZE, OPTIMIZE, CHECK and CHECKSUM TABLE. To access it, right click a schema and select Schema Inspector.

Table Data Search

You can select schemas and/or tables to perform client-side searches for arbitrary strings and patterns on their contents.

Table Templates

If you find yourself wishing for more control over the default column definition and often create tables having the same common set of columns, you can now create templates for them. The same templates can be used in the SQL Editor and also in the EER Modeling tool.

 

Improved Server Status

More information about your server at a glance.

Context Sensitive Help

Online, context sensitive help is available in query editors. Place the cursor over a SQL keyword and the help tables in the server will be queried for it. This is equivalent to the HELP keyword from the command line client. To disable help, hide or switch the sidebar pane to a different tab.

Vertical Query Output

A new text mode, vertical query output was introduced. This is equivalent to the \G option from the command line client and outputs the results of a query laid out in Column/Value pairs, one value per row. This improves readability of certain types of resultsets. Ctrl+G/Cmd-G can be used as a shortcut for that command.

Cascaded Delete Statement Generator

You can now generate the list of DELETE statements that would be needed to delete a given row from a table, in case there are other tables with foreign keys that reference it (which would prevent the row from being deleted). Select a table in the Schemas tree and, from the context menu, select Copy to Clipboard -> Delete with References. A similar feature for SELECT statements will generate the queries that list the rows that would be deleted from the other tables.

 

Improvements to Log Viewer

The log file viewer was improved for browsing files that are not readable by the current user.

Modeling

Synchronization

DB modeling got several bug fixes, especially in Synchronization. You can now synchronize a schema with another one that has a different name (like sakila in your model vs sakila_test on the server). You can also fix object name mapping issues (when a table or column cannot be automatically recognized as being the same because of renames) using the new Table and Column Mapping editor.

Syntax Highlighting was also added to the various places where SQL code is visualized.

Table Templates

Quickly add tables using table templates, having any number of columns with the attributes you want.

Improved SQL Editors

A standardized toolbar was added to all SQL code editors, where you can export/import files, reformat the SQL, perform find/replace etc.

Migration Wizard

The migration wizard was extended to support 2 new database sources: you can now also migrate from Sybase SQL Anywhere and SQLite. That brings the total list of supported sources to:

  • MS SQL Server 2000, 2005, 2008, 2012
  • Sybase Adaptive Server Enterprise
  • PostgreSQL
  • Sybase SQL Anywhere
  • SQLite
  • and MySQL to MySQL migrations/database copies
This should get you started with the new stuff, but we’ll keep posting articles with details about each feature in the coming weeks, so make sure to come back for more info!

MySQL Workbench 6.0: Table Data Search


scr 1. Location of Search table data on the main toolbar

One of the new features of MySQL Workbench 6.0 is Table Data Search. Its main purpose is to ease searching for data across a whole instance. Previously, we needed to use some tricks to get a query to run over all the schemas we have on the server. Now it’s easy to find the searched term with much less hassle. This functionality is easy to use and provides searching through all columns and even all column types. However, we can’t forget that, due to the nature of this tool, we must take some precautions to not overload the server.

To use this functionality, pick the item named “Search Table Data…” from the Database menu, or just click the icon on the main toolbar (scr 1). The third option is to select Search Table Data… from the context menu when you right click a schema in the schema list.
After that you will see a new screen (scr 2) with a few options that you must fill in to get started.

scr 2. Search table data window

As you can see, the interface is very simple, but I’ll still try to explain some of the options. The first thing is the Search for Text input, where you just put the phrase that you’d like to find. How the phrase is matched depends on the select box that is located below this input (scr 3). That select box offers three different search types:

Search using =
Search using LIKE
Search using REGEXP

The first one, Search using =, is the simplest: it just matches fields using the = operator.
The second option is a little more powerful; it allows you to search using the database LIKE operator, where you can provide wildcards like % (match any sequence of characters) or _ (match any single character).
The third option allows you to use regular expressions.

scr 3. Search table data match options

Next to the selection box, you’ll see two inputs described as Max. matches per table and Max. total matches. The first one limits the number of matches reported for a single table. The second one limits the whole search, so when more than 10000 (the initial value) entries are found, the search will stop. The last option is the check box named Search columns of all types. Initially, searching is done only through text fields; when this option is enabled, all columns will be used and cast to the char data type. This check box has a great impact on overall server performance, so use it with caution!

Now that you know how each option relates to the search, I’ll step through a sample search and describe the results. I assume that you have the sakila database installed, since I’ll use it for the example.

  1. open the Search table data…
  2. enter text mary into the Search for Text field
  3. check if the Search using option is set to the equal sign
  4. check if the Search columns of all types check box is unchecked
  5. in the schema selector, select the schema (or column) that you’d like to be searched for the matching phrase
  6. press the Start Search button.

scr 4. Search table data results view

Your result should be the same as in the screen shot (scr 4). You can click the arrow in the result list to expand the details of a row. The columns are as follows:

Schema – name of the database that holds the columns that matched the criteria
Table – name of the table
Key – primary key value of the row that holds the matching data
Column – name of the column that holds the matched data
Data – the top row shows how many times the phrase was matched; the expanded details contain the matched data

There is also a context menu for the result set, available when you right click on it. The menu allows you to copy the queries that were used to find the matching rows. You can also copy a query that matches the rows by primary key, and the last option allows you to copy the key values.

And that’s it: you’ve done your first search using the new Search Table Data option. Please remember that using this feature has a very big impact on general server performance, because you’re essentially doing full table scans. Stay with us for more cool information.

MySQL Workbench 6.0: Help is on the way…


Do you know this scenario: you are writing down a stored procedure but you can’t for the life of you remember the exact syntax of that CASE statement? Does it have to end with CASE or not? Can I use more than one WHEN part, and how should that be written? Usually you end up opening a web page and reading through the excellent MySQL online docs. However, this might cost too much time if you quickly need different statements and other detail info. Here’s where MySQL Workbench’s context help jumps in.

The server can help

It’s probably only known to the die-hard terminal operators who write most of their SQL queries in a MySQL console window: the MySQL server already has a stripped-down set of help topics produced by the Docs team. That means you can always get at least the syntax, and often far more information, for a particular syntax element when you work with a server. When you install a MySQL server, it usually comes with the help tables loaded. For those experimenting with pre-releases of the server, you can download scripts to fill your help tables.

MySQL Workbench makes use of this little gem. The query at the current caret position is examined to find a help topic. If one can be found, it is shown in the Context Help sidebar.

WB Screenshot (Windows) SQL Editor Context Help

But wait, isn’t the output of the help topic in the command line client rather simple? Yes, that’s how it looks:

WB Screenshot (Windows) Help in the command line client

But it is still the same data as shown in the sidebar. With some reformatting, MySQL Workbench presents a much more readable version of that text. And not only that: URLs are displayed as clickable links. Selecting a link to another topic simply switches to that topic in the sidebar. External documentation is shown in a separate browser tab. So what you actually get is both: the quick lookup while typing and direct access to the online help with the extended info for that topic. This will often save you from doing an explicit online search. Pretty cool, huh? But there’s more.

To the point

Writing a complex query can easily cover 2-3 pages, and when you’re down in that dreaded JOIN part you certainly don’t want the help for the SELECT statement. MySQL Workbench can deliver that. It looks at the word at the current caret position (math operators qualify too), and if that doesn’t have a topic to show, the next enclosing part is examined (e.g. a subselect or a JOIN). So you get help as close as possible to your current work position.

 

WB Screenshot (Windows) SQL Editor Context Help with inner part

Determining a topic is complicated, especially if your query is in a half finished state, so it might not always give you the wanted info. But we will improve that feature over time.

MySQL Workbench: Vertical Query Output


MySQL Workbench has one nice feature that is probably a stranger to some of us. The name of this feature is vertical query output; it helps in situations where the standard Workbench output is not very useful. This functionality is very easy to use, and in this post I’ll try to visualize some of its benefits.

First we need to know how to use it, so we’ve provided you with two options to execute a query with vertical output. One of them is the menu bar, where you can find an item named Execute vertically; there you’ll also find a hint about the shortcut for that option, CTRL+ALT+RETURN.

After you know how to get the vertical query output, I’ll show you some screen shots to compare it with command line output.

Let’s take the command that suits this type of output best: SHOW ENGINE INNODB STATUS. Normally, to understand the output, you probably copy it to some notepad app and add line breaks. That was a little annoying, especially when you know how it looks in the command line client with \G. So let’s take a look at the output of the console and of Workbench.

Vertical output – console preview
Vertical output – Workbench preview

You should find that it’s the same view as in the console. Below you’ll see how it looks in Standard Output

[screenshot: SHOW ENGINE INNODB STATUS – standard output]

and with Text Output.

[screenshot: SHOW ENGINE INNODB STATUS – text output]

Here is also one more screen shot of the EXPLAIN query:

[screenshot: EXPLAIN query – vertical output]

Please feel free to comment on this, and let us know how you like it.

MySQL Workbench 6.1: Query Result Enhancements


The SQL Editor in MySQL Workbench 6.1 adds a new interface for query results. This addition offers 4 new views of your query results: 2 for the result data itself and 2 for meta-information about the query that was executed.

Query Result Set Grid

The traditional result set grid. Run a SELECT query on a table with a primary key and you can edit the data. You must click the Edit button to enter edit mode.

Note: Until Workbench 6.1.1, the check was being done automatically for every SELECT query, but since that requires extra queries to MySQL, the check is now done on demand.


 

Result Set Form Editor

The new form editor for result sets comes in handy when you want to closely inspect the fields of each record (especially if it has multi-line text). You can also edit the individual records, if your result set is editable.


Result Set Field Types

Here, you can inspect information about the fields that were returned by the MySQL server in your query results. Similar to the --column-type-info option of the command line client, it shows you the schema and table the field comes from, as well as type information.


Performance Schema Statistics

This tab uses data gleaned from the performance_schema (in MySQL 5.6) to show some key statistics about the execution of your query, as collected by the server. For this tab to appear, you need to have the performance_schema enabled with statement instrumentation.


You can read about the meaning of each item in the MySQL performance_schema documentation, but here’s a summary of some key items:

  • Timing: the timing information shown in the Action Output area in Workbench is the query execution time as measured at the client side, so it will include network travel time. But here you also have the timing as instrumented by the server itself. This includes the amount of time waiting for table locks, as a separate value.
  • Rows Processed: the number of rows that were evaluated and then generated to be sent back to the client
  • Temporary Tables: the number of temporary tables that had to be created for the query to be executed
  • Joins per Type: the number of JOINs that were executed for the query, broken down by type. This is similar to the info you’d get from EXPLAIN.
  • Sorting: the number of rows that had to be sorted by the server.
  • Index Usage: you can see here whether table scans had to be performed without using an index.

You can disable fetching of this information from the Query -> Collect Performance Schema Stats menu item. You may want to do that if you don’t need the stats, since an extra query has to be executed for every query you run.

 

 


MySQL Workbench 6.1: Server Variables grouping


MySQL Workbench has an option to view MySQL server variables divided into groups [img. 1], for example: Binlog, General, Keycache, Performance, etc. This is okay if we just want to look around, but it can become overwhelming when we only want to monitor specific variables from different groups.


img.1. Server Variables main view

In MySQL Workbench 6.1, we solve this by implementing Custom Groups. It’s a special group that can be created by the user. At the end of the Category List, there is already one defined group, called Custom. When selected, you’ll find a description in the Variable List [img. 2].


img. 2. Server Variables custom group

 

Variable grouping is easy. You simply right-click the chosen variable, and choose an option from the context menu.


img. 3. Server Variables multiple selection add to group

 

The “Add to Custom Category…” menu item pops up a mini editor that allows you to create or remove your own custom variable groups [img. 4].


img. 4. Server Variables Group Editor

 

You can also directly add a variable to a group by using the menu items that are located below the “Add to Custom Category…” context menu item. The groups you create will be shown in the Category list, and you only need to select them [img 5].


img. 5. Server Variables Custom Group Variables

 

To remove a variable from a custom group, select the corresponding group, and then right-click to open the context menu for the variable you want to remove, and choose the remove option [img. 6].


img. 6. Server Variables Variable Removal

 

Variable groups are stored at the user level. In other words, each connection will have the same category groups.

We hope this new feature will help you organize your work a little bit better.

MySQL Workbench 6.2: Spatial Data


The Spatial Viewer

MySQL 5.7 will include much awaited GIS support for InnoDB tables. To make it easier to quickly visualize spatial/geometry data in a geographic context, Workbench 6.2 includes a viewer for resultsets containing that type of data. The viewer will render the data from each row as a separate clickable element. When an element is clicked, you can view the rest of the data from that row in the textbox. If you have multiple queries with geometry data, you can overlay them in the same map.

[screenshot: the Spatial Viewer]

But that’s not all. The Spatial Data Viewer gives you the possibility to display the data using different projection systems; right now you can use Robinson, Mercator, Equirectangular and Bonne. There’s even an option to merge different resultsets: execute all of them and switch to the Spatial View, and you’ll notice a separate layer for each resultset. You can also zoom in and out, and jump to a specific location.

 

The Geometry Viewer

Both the Field and Form Editors were updated to support the GEOMETRY datatype. You can view geometry data like polygons from a single row as an image or as text, in any of the common WKT, GeoJSON, GML or KML formats.

MySQL Workbench 6.2: It’s all about the Query


Improved Visual Explain

In MySQL 5.7, the Optimizer Team has been doing great work on refactoring as well as innovating with the new Cost Model. The improved Visual Explain enables the DBA to get deeper insights into the Optimizer’s decision making, for improved performance tuning of queries. The UI was also improved to allow easier navigation in large query plans.

Streamlined Query Results Panel

The query results panel was updated to centralize the many features related to result sets into a single location. Result Grid, Form Editor, Field Types, Query Stats, Execution Plan (including the traditional and Visual Explain) and the new Spatial Viewer are all easily accessible from a single interface.

Run SQL Script

It often happens that people try to load gigantic SQL script files into the Workbench SQL editor just to execute them. That will rarely work, as loading files for editing uses a lot of memory and Workbench does a lot of processing in the editor (syntax highlighting, syntax checking, code folding etc). To execute arbitrarily large scripts easily, you can now use the dialog at File -> Run SQL Script. The dialog lets you preview a part of the script, specify a default schema (in case it’s not already defined) and a default character set to use when importing it. The output window shows warnings, messages and a nice progress bar.

Shared Snippets

SQL Snippets are useful to store queries and commands that are used often, but until now they could only be stored locally. In 6.2, you can now store snippets in the MySQL server you’re connected to and anyone anywhere who can access the .mysqlworkbench schema can also use these snippets.

Small changes

Resultset grid columns are now automatically resized to fit – and if you manually resize a column, the customized size is remembered, so next time you run that query again, the columns will be back to the size you left them.

Customize font for resultset grid - some people want to cram more text in the resultset grid, some people prefer bigger, easier to read text. Now you can pick what you like in Preferences.

Improved state saving for the SQL Editor – Opened, closed and reordered tabs are now properly saved and restored. The scroll position and cursor location is also remembered.

MySQL Workbench 6.2: Usability improvements and more


Direct Schema Tree Action Buttons

The schema tree in the SQL Editor now has some very convenient buttons for accessing the most used functions for each object type:

  • Table or Schema Inspector
  • Object structure editor
  • Table data browser/editor
  • Call Stored Procedure or Function

Format Note Objects in Diagrams

Note objects in diagrams can now be resized and have their contents automatically rearranged. You can also change style attributes like font, background color and text color.

Other improvements and bug fixes that make a difference

The MySQL password is remembered for the session, even if it is not stored in the keychain, so you don’t need to re-enter it when a new connection is needed.

Keyboard shortcuts now work in the Scripting Shell.

Platform Updates

MySQL Workbench 6.2 also finally adds native 64-bit support for Windows. This should allow working with larger data sets and script files. Oracle Linux/RHEL 7 support was added. To improve quality and user experience, we will be providing 64-bit binaries for Linux. Linux users who want 32-bit binaries can compile from source.

Parsing in MySQL Workbench: the ANTLR age


Some years ago I posted an article about the code size in the MySQL Workbench project and talked a bit about the different subprojects and modules. At that time the project consisted of ~400K LOC (including third-party code), and already then the parser was the second biggest part, with nearly a fourth of the size of the entire project. That parser project used the yacc grammar from the MySQL server codebase and was our base for all parsing tasks in the product. Well, things have changed a lot since those days, and this blog post discusses the current parsing infrastructure in MySQL Workbench.

We started looking into a more flexible way of creating our parser infrastructure. In particular, the generation of the lexer and parser from the grammar was a long-winded process that included a lot of manual tweaking. The most important advantage of using the MySQL server yacc grammar is, however, that we always stay in sync easily, though this is true only for the server version the grammar was taken from. But MySQL Workbench needs more flexibility, supporting a whole range of server versions (from 5.1 up to the latest 5.7.8). Hence we decided to switch to a different tool: ANTLR. Not so surprising, however: the yacc based parser is still part of MySQL Workbench, because it’s not possible to switch such an important part in one single step. Over time, though, the ANTLR based parsers will ultimately become our central parsing engine, and one day we can retire the yacc parser entirely.

Files created by ANTLR are the biggest single source files I have ever seen. The MySQLLexer.c file is 40MB in size with almost 590K LOC. No wonder our project metrics have changed remarkably, though not only because of the ANTLR based parser. Here are the current numbers (collected by a script shipped with the source zip):

machine:workbench Mike$ sh tools/count_loc.sh

c (antlr parser): 1484033 loc         6 files
cpp: 418494 loc       704 files
cxx: 28484 loc         2 files
mm: 31926 loc        97 files
m: 9795 loc        37 files
py: 87652 loc       170 files
cs: 43149 loc       150 files
h: 143743 loc       928 files
Total: 2247276 (763243 without ANTLR parser)
Total Files: 2094 (1166 without headers)

The reason for the big size is the support of the full Unicode BMP for identifiers, which requires some really big state tables in the lexer.

ANTLR 3 – The Workhorse

The current version of ANTLR is 4, published almost 2 years ago. However, we are still on version 3, for a good reason. ANTLR can generate parsers for various target languages, like C#, Java and Python. However, still today, there is no C or C++ target for ANTLR 4, while ANTLR 3 supports both languages well. Hence we decided to stay with ANTLR 3, and with every addition we make (e.g. see the code completion engine) we become more tied to it and less likely to upgrade to version 4 any time soon. At least a C target should have been one of the first targets, really.

But why not stay with the server’s parser, you might ask? It’s thoroughly tested and obviously as compatible as a parser can be for the MySQL language. Well, a flexible client tool has different needs compared to a server, and that’s why. It starts with the ability to support multiple server versions (the server parser only ever supports its current version), continues with different requirements for handling erroneous SQL code, and really goes its own way when it comes to tooling (like the mentioned code completion engine or the quick syntax checker). ANTLR 3 generates so-called top-down parsers (recursive descent), while yacc creates bottom-up parsers, which use a different approach to parse text. Our ANTLR based parser usually gives better error messages, e.g. for a query like:

select 1, from sakila.actor;

the server shows the dreaded “Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ‘from sakila’ at line 1”, while the ANTLR parser says: Syntax error: unexpected comma (with a precise position). You can see this in action in the MySQL Workbench SQL editors (via error tooltips).

Another really useful ability of an ANTLR based parser is that you can use any rule in the grammar as a starting point, which is not easily possible with a yacc based parser. All grammar rules are generated as functions (remember: recursive descent parser), so you can always call any of them with your input; e.g. you can easily parse only expressions, instead of a full query. We use this ability to parse SQL code in our object editors (stored procedures, triggers etc.), which implicitly disallows any SQL code not allowed at that point. Also datatype parsing uses the new parser, allowing for maximum conformity of user specified data types and good feedback to the user in case of an error. For developers it is also very important that you can easily debug the parser code, if needed. Try that with a yacc based parser, which only iterates over states.
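
To picture the idea, here is a hypothetical, highly simplified sketch in Python (not the generated C code): in a recursive-descent parser every rule is an ordinary function, so callers can start parsing at the top-level rule or call any inner rule directly.

# Toy recursive-descent parser: every grammar rule is a plain method,
# so any rule can serve as the entry point (illustration only, not WB code).
import re

class MiniParser(object):
    def __init__(self, text):
        # extremely simplified tokenizer: numbers, words, single punctuation chars
        self.tokens = re.findall(r"\d+|\w+|[^\s\w]", text)
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def consume(self, expected=None):
        token = self.peek()
        if expected is not None and token != expected:
            raise SyntaxError("unexpected %r, expected %r" % (token, expected))
        self.pos += 1
        return token

    def query(self):            # rule: query ::= SELECT expression
        self.consume("SELECT")
        return self.expression()

    def expression(self):       # rule: expression ::= term ("+" term)*
        value = self.term()
        while self.peek() == "+":
            self.consume("+")
            value += self.term()
        return value

    def term(self):             # rule: term ::= NUMBER
        return int(self.consume())

print(MiniParser("SELECT 1 + 2").query())   # start at the top-level rule -> 3
print(MiniParser("40 + 2").expression())    # or parse just an expression -> 42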

Finally, the server grammar has quite a number of problems, like a big number of shift-reduce conflicts, a handwritten lexer which is hard to maintain, trouble with semantic action execution (multiple execution if not carefully placed) and others. However, the server team is constantly working on improving their parser. It’s just not a good choice for MySQL Workbench.

We decided to use the C target from ANTLR because the C++ support was not only incomplete (and still is) but led to huge compilation times. Integrating a C module into a C++ environment is trivial and rewards us with a high performance parser.

A Dynamic Parser

Above I mentioned that a GUI tool like MySQL Workbench needs to support multiple server versions. Additionally, it must be possible to switch behavior based on the SQL mode. All this is possible through the use of so-called semantic predicates. These constructs allow us to switch rules or alternatives off and on based on some condition (e.g. the server version). This ensures a user will always get the right syntax check and proper code completion, regardless of which server they actually connect to. This goes so far that we can easily toggle language parts that were only valid for a certain version range (e.g. the NONBLOCKING keyword, which was valid only between 5.7.0 and 5.7.6).
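
A rough way to picture a semantic predicate (again a hypothetical Python sketch, not the actual grammar code): an alternative of a rule is only taken when a condition over the server version holds, mirroring the NONBLOCKING example above.

# Hypothetical sketch: the NONBLOCKING keyword is only accepted while the
# server version lies in the range in which the keyword existed.
def nonblocking_allowed(server_version):
    # version encoded as MAJOR*10000 + MINOR*100 + PATCH, e.g. 5.7.5 -> 50705
    return 50700 <= server_version < 50706

print(nonblocking_allowed(50705))  # True: still valid for 5.7.5
print(nonblocking_allowed(50708))  # False: no longer valid for 5.7.8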

We not only use the generated files for standard parsing tasks, but also have a dedicated scanner (based on the lexer) that helps us determine a context for the context sensitive help. This way we can even handle partially invalid SQL code easily.

In contrast to the server grammar and the parser generation from it, our ANTLR parser is generated and compiled automatically during a build, and only when there was a change in the grammar file. This allows for easy development of the parser grammar. A test tool exists that uses the same parser and allows taking a deeper look at a query and the parsing process, by printing the AST (abstract syntax tree) in a window. Additionally, it can run a subset of our parser unit tests and even generate test result code that we use to feed certain parser tests.

The Parsers in Code

Generating the parsers requires Java to be installed (because ANTLR is a Java tool), which is the reason why we include the generated files in the source package. This way you are not forced to install Java when you want to build MySQL Workbench; the generation step is simply skipped as long as the grammar and the generated files have the same timestamp. However, as soon as you change the grammar, you will need Java (and the ANTLR jar) to regenerate the parser files when you build MySQL Workbench yourself.

Starting with Workbench 6.3 we use 2 parser variants: one that generates an AST (abstract syntax tree) and one without. The latter is used for our quick syntax checker as it is twice as fast as the one generating an AST (generation cannot be switched dynamically). The AST however is needed to easily walk the parsed elements, e.g. to find the type of a statement, convert query details into our internal representation, manipulate (rewrite) queries and other things.

The entire parsing engine is organized in 3 layers:

  • The generated C parsers wrapped by small classes to provide a C++ interface (including a tree walker, the mentioned syntax checker and a scanner). You can find all related files in the “library/mysql.parser” subfolder. The generated and wrapper files are:
    • MySQL.tokens (a list of token names and their values)
    • MySQLLexer.h/c (the generated lexer)
    • mysql-scanner.h/cpp (the C++ wrapper around the generated lexer)
    • MySQLParser.h/c (the generated (full) parser)
    • mysql-parser.h/cpp (the C++ wrapper around the generated (full) parser)
    • MySQLSimpleParser.h/c (the generated parser without AST creation) + its token file
    • mysql-syntax-check.h/cpp (the C++ wrapper for that, it shares the lexer with the main parser)
    • Some support files (mysql-parser-common.h/cpp, mysql-recognition-types.h)
    • The “library/mysql.parser/grammar” folder contains the 2 grammar files (full + simplified parser), build scripts for each platform and the test application (currently for OSX only).
  • A module providing our so-called parsing services, including parsing of individual CREATE statements (e.g. for our table or view editors). The parsing services mostly deal with the conversion of SQL text into our GRT tree structure, which is the base for all object editors etc. Currently this is separated into a dynamically loadable module containing the actual implementation, and an abstract class for direct use of the module within Workbench. The related files are:
    • modules/db.mysql.parser/src/mysql_parser_module.h/cpp (the module with most of the actual code)
    • backend/wbpublic/grtsqlparser/mysql_parser_services.h/cpp (the abstract interface for the module + some support code)
  • The editor backend driving the UI, connecting the parsing services and implementing error checking and markup as well as code completion. This layer is spread over a couple of files, all dealing with a specific aspect of handling SQL code, which includes query determination and data type parsing as well as our object editors and the SQL editor backend. This backend is a good example of the integration of GUI, Workbench backend and parser services, including syntax checks and code completion (backend/wbpublic/sqlide/sql_editor_be.h/cpp).

Conformity

After some years, while our grammar and parser matured, we reached not only full conformity with the server parser, but could even add language features that aren’t released yet. Our grammar is as close to the server parser as one can be, and it is the most complete grammar you can get for free (it ships as part of the MySQL Workbench package, not only in the source zip but also in the binary package, because it is needed for code completion; or download MySQL.g directly). Once in a while (also before big releases) we scan all changes in the server’s sql_yacc.y file and incorporate them, to stay up to date.

Additionally, we have a large set of unit tests that check proper behavior of the generated parser. Some of them (e.g. the sql mode and operator precedence tests) were taken from the MySQL server tests. We have a set of ~1000 queries of all types, to cover most of the language and a special set of commands that stress the use of identifiers (as documented for MySQL) as well as huge table definitions with hundreds of columns and indices etc.

Big Thanks

Finally, I’d like to express my respect and thankfulness to the people behind such an extremely useful and well done tool as ANTLR: mainly Prof. Terence Parr (the ANTLR guy) and Sam Harwell, for their dedication to ANTLR over all the years, as well as Jim Idle, for solving the complex task of converting an OOP ANTLR target (Java) to a non-OOP language (C), which is the foundation we build everything on.

ANTLR has its own Google discussion group. Please join our discussion there about ANTLR in MySQL Workbench.

Universal Code Completion using ANTLR


While reworking our initial code completion implementation in MySQL Workbench, I developed an approach that can potentially be applied to many different situations/languages where you need code completion. The current implementation is made for the needs of MySQL Workbench, but with some small refactorings you can move out the MySQL specific parts and have a clean core implementation that you can easily customize to your needs.

Since this implementation is not only bound to MySQL Workbench I posted the full description on my private blog.
