Search Results

Search found 48190 results on 1928 pages for 'mysql slow query log'.


  • How do you fix a MySQL “Incorrect key file” error when you can’t repair the table?

    - by Wayne M
    I'm trying to run a rather large query that is supposed to run nightly to populate a table. I'm getting the error "Incorrect key file for table '/var/tmp/#sql_201e_0.MYI'; try to repair it", but the storage engine I'm using (whatever the default is, I guess?) doesn't support repairing tables. How do I fix this so I can run the query? We are under pressure to get this table loaded for a client.
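
    Not part of the original question, but a common cause of this exact error that is worth ruling out first: the filesystem holding MySQL's temporary directory fills up while the query builds its on-disk temporary table (note the failing file lives under /var/tmp). A minimal check, plus a hedged my.cnf change assuming a larger volume is available (the /data path below is illustrative, not from the question):

      -- Where does MySQL write temporary tables?
      SHOW VARIABLES LIKE 'tmpdir';

      # At the shell: is that filesystem full while the query runs?
      df -h /var/tmp

      # my.cnf: point tmpdir at a volume with more free space
      [mysqld]
      tmpdir = /data/mysql-tmp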

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would take upwards of 30 minutes on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

      ------------------------------------------------------------------------
      | Id  | Operation              | Name                   | Bytes | Cost |
      ------------------------------------------------------------------------
      |   0 | SELECT STATEMENT       |                        |  108K |  939 |
      |   1 | SORT ORDER BY          |                        |  108K |  939 |
      |   2 | NESTED LOOPS OUTER     |                        |  108K |  938 |
      |*  3 | HASH JOIN RIGHT OUTER  |                        |  103K |  762 |
      |   4 | VIEW                   | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
      |* 20 | HASH JOIN RIGHT OUTER  |                        | 73472 |  759 |
      |  21 | VIEW                   | ALL_EXTERNAL_TABLES    |  2097 |    3 |
      |* 34 | HASH JOIN RIGHT OUTER  |                        | 39920 |  755 |
      |  35 | VIEW                   | ALL_MVIEWS             |    51 |    7 |
      |  58 | NESTED LOOPS OUTER     |                        | 39104 |  748 |
      |  59 | VIEW                   | ALL_TABLES             |  6704 |  668 |
      |  89 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2025 |    5 |
      | 106 | VIEW                   | ALL_PART_TABLES        |   277 |   11 |
      ------------------------------------------------------------------------

    And the same query on 9i:

      ------------------------------------------------------------------------
      | Id  | Operation              | Name                   | Bytes | Cost |
      ------------------------------------------------------------------------
      |   0 | SELECT STATEMENT       |                        |   16P |  55G |
      |   1 | SORT ORDER BY          |                        |   16P |  55G |
      |   2 | NESTED LOOPS OUTER     |                        |   16P | 862M |
      |   3 | NESTED LOOPS OUTER     |                        | 5251G | 992K |
      |   4 | NESTED LOOPS OUTER     |                        | 4243M | 2578 |
      |   5 | NESTED LOOPS OUTER     |                        | 2669K | 1440 |
      |*  6 | HASH JOIN OUTER        |                        |  398K |  302 |
      |   7 | VIEW                   | ALL_TABLES             |  342K |  276 |
      |  29 | VIEW                   | ALL_MVIEWS             |    51 |   20 |
      |* 50 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2043 |      |
      |* 66 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES    | 1777K |      |
      |* 80 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_LOCATIONS | 1744K |      |
      |* 96 | VIEW                   | ALL_PART_TABLES        |  852K |      |
      ------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i's is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects; from 10g, it advises that you do. So on 10g, Oracle knows what sort of data is in the dictionary tables, and can generate an efficient execution plan for our queries.

    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

      ------------------------------------------------------------------------
      | Id  | Operation              | Name                   | Bytes | Cost |
      ------------------------------------------------------------------------
      |   0 | SELECT STATEMENT       |                        | 7587K | 3704 |
      |   1 | SORT ORDER BY          |                        | 7587K | 3704 |
      |*  2 | HASH JOIN OUTER        |                        | 7587K |  822 |
      |*  3 | HASH JOIN OUTER        |                        | 5262K |  616 |
      |*  4 | HASH JOIN OUTER        |                        | 2980K |  465 |
      |*  5 | HASH JOIN OUTER        |                        |  710K |  432 |
      |*  6 | HASH JOIN OUTER        |                        |  398K |  302 |
      |   7 | VIEW                   | ALL_TABLES             |  342K |  276 |
      |  29 | VIEW                   | ALL_MVIEWS             |    51 |   20 |
      |  50 | VIEW                   | ALL_PART_TABLES        |  852K |  104 |
      |  78 | VIEW                   | ALL_TAB_COMMENTS       |  2043 |   14 |
      |  93 | VIEW                   | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
      | 106 | VIEW                   | ALL_EXTERNAL_TABLES    | 1777K |   28 |
      ------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - the right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates the problem on our test systems, there's no guarantee that it will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we wanted a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as requiring a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend time optimizing the queries to get the product working on their system before they could use it. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns), and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements possible for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing on disk any row data we're not using in the hash key, we use very specific memory-efficient data structures to store all the information we need.

    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra 1.4MB of memory used during population). Next: fun with the 9i dictionary views.
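
    For readers curious what "forcing a hash join" looks like in practice, the usual mechanism on Oracle is a USE_HASH hint. The query below is only an illustrative two-view sketch using 9i-compatible (+) outer-join syntax; it is not SCfO's actual dictionary query:

      -- Illustrative only: hint a hash outer join between two dictionary views.
      SELECT /*+ USE_HASH(t pt) */
             t.owner, t.table_name, pt.partitioning_type
      FROM   all_tables t, all_part_tables pt
      WHERE  t.owner = pt.owner (+)
      AND    t.table_name = pt.table_name (+)
      ORDER  BY t.owner, t.table_name;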

    Read the article

  • Cron doesn't execute one of the scheduled jobs

    - by user288633
    I'm using a Lubuntu desktop, distribution Ubuntu 13.10, i686. This is my problem: one of the jobs scheduled by cron has no effect, even though its execution is traced in /var/log/syslog. This is the relevant log line:

      Jun 4 09:06:01 kiosk CRON[14189]: (root) CMD (/usr/bin/xinput set-prop 12 --type=float "Coordinate Transformation Matrix" 0 -1 1 1 0 0 0 0 1 /tmp/mybackup.log)

    This job should rotate the touchscreen mapping. I have tried different solutions: I substituted the command in crontab with bash -c "...", I set export DISPLAY=:0.0 before the command ("for graphics-related jobs in a Unix environment we need to set DISPLAY first..."), and many others! I know there are lots of details that affect cron execution (path, environment variables, special characters and so on), and I have no more ideas by now :( Could someone suggest an idea? Where can I look for the problem? Thanks in advance!
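
    One detail that DISPLAY alone doesn't cover, and a common reason X commands run from cron do nothing: the job also needs authorization for the X server, usually via the session owner's Xauthority file. A hedged illustration of the /etc/crontab form (the schedule, device id, and Xauthority path below are illustrative guesses adapted from the question, not a verified fix):

      # Illustrative /etc/crontab entry: give the job both DISPLAY and XAUTHORITY,
      # and redirect output so errors from xinput become visible in the log file.
      6 9 * * * root DISPLAY=:0 XAUTHORITY=/home/kiosk/.Xauthority /usr/bin/xinput set-prop 12 --type=float "Coordinate Transformation Matrix" 0 -1 1 1 0 0 0 0 1 >> /tmp/mybackup.log 2>&1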

    Read the article

  • Syntax for piping varnish logs to rotatelogs

    - by jetboy
    Ubuntu 12.04 Server x64, Varnish 3.0.2

    I'm trying to pipe varnishncsa's logs through Apache's rotatelogs, and running from the shell, things work fine:

      sudo varnishncsa -a -P /var/run/varnishncsa/varnishncsa.pid | /usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600

    This creates a new logfile in /var/log/varnish, with rotation every hour (3600 seconds). However, I'm struggling to get things working the same way inside /etc/init.d/varnishncsa:

      PATH=/sbin:/bin:/usr/sbin:/usr/bin
      DAEMON=/usr/bin/$NAME
      PIDFILE=/var/run/$NAME/$NAME.pid
      LOGFILE=/var/log/varnish/varnishncsa.log
      USER=varnishlog
      DAEMON_OPTS="-a -P ${PIDFILE}"
      DAEMON_PIPE="|/usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600"
      ...
      start_varnishncsa() {
          output=$(/bin/tempfile -s.varnish)
          log_daemon_msg "Starting $DESC" "$NAME"
          create_pid_directory
          if start-stop-daemon --start --verbose --pidfile ${PIDFILE} \
              --chuid $USER --exec ${DAEMON} -- ${DAEMON_OPTS} \
              > ${output} 2>&1; then
              log_end_msg 0
          else
              log_end_msg 1
              cat $output
              exit 1
          fi
          rm $output
      }

    Where should I put DAEMON_PIPE in the above code? I've tried it at the end of the start-stop-daemon line, which is where additional command line parameters usually go, but it isn't creating a logfile.
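
    For what it's worth, appending $DAEMON_PIPE to the start-stop-daemon line cannot work as written: start-stop-daemon doesn't run the command through a shell, so the | inside the expanded variable is passed to varnishncsa as a literal argument instead of creating a pipe. A hedged, untested sketch of one workaround is to have start-stop-daemon exec a shell and build the pipe inside it:

      # Sketch only: let start-stop-daemon run a shell, and pipe inside it.
      # Note --exec now points at the shell, so stop/status handling may need
      # adjusting (e.g. matching on the pidfile alone).
      if start-stop-daemon --start --verbose --pidfile ${PIDFILE} \
          --chuid $USER --exec /bin/sh -- -c \
          "exec ${DAEMON} ${DAEMON_OPTS} | /usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600" \
          > ${output} 2>&1; then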

    Read the article

  • Can't log in to GNOME after upgrade (raring -> saucy)

    - by x-yuri
    I've just upgraded my Ubuntu (raring to saucy) and now I can't log in to GNOME, as opposed to the virtual consoles (Ctrl-Alt-F1, for example), where logging in works. I had set it up to log in automatically, but it asks for a password now. I type in the password, press Enter, the screen blinks, and here I am again at the login screen. Then I looked into /var/log/Xorg.0.log:

      [    33.956] Initializing built-in extension DRI2
      [    33.956] (II) LoadModule: "glx"
      [    33.956] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
      [    33.956] (II) Module glx: vendor="X.Org Foundation"
      [    33.956]     compiled for 1.14.3, module version = 1.0.0
      [    33.956]     ABI class: X.Org Server Extension, version 7.0
      [    33.956] (==) AIGLX enabled
      [    33.956] Loading extension GLX
      [    33.956] (==) Matched fglrx as autoconfigured driver 0
      [    33.956] (==) Matched ati as autoconfigured driver 1
      [    33.956] (==) Matched fglrx as autoconfigured driver 2
      [    33.956] (==) Matched ati as autoconfigured driver 3
      [    33.956] (==) Matched vesa as autoconfigured driver 4
      [    33.956] (==) Matched modesetting as autoconfigured driver 5
      [    33.956] (==) Matched fbdev as autoconfigured driver 6
      [    33.956] (==) Assigned the driver to the xf86ConfigLayout
      [    33.956] (II) LoadModule: "fglrx"
      [    33.957] (WW) Warning, couldn't open module fglrx
      [    33.957] (II) UnloadModule: "fglrx"
      [    33.957] (II) Unloading fglrx
      [    33.957] (EE) Failed to load module "fglrx" (module does not exist, 0)
      [    33.957] (II) LoadModule: "ati"
      [    33.957] (WW) Warning, couldn't open module ati
      [    33.957] (II) UnloadModule: "ati"
      [    33.957] (II) Unloading ati
      [    33.957] (EE) Failed to load module "ati" (module does not exist, 0)
      [    33.957] (II) LoadModule: "vesa"
      [    33.957] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
      [    33.957] (II) Module vesa: vendor="X.Org Foundation"
      [    33.957]     compiled for 1.14.1, module version = 2.3.2
      [    33.957]     Module class: X.Org Video Driver
      [    33.957]     ABI class: X.Org Video Driver, version 14.1
      [    33.957] (II) LoadModule: "modesetting"
      [    33.957] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
      [    33.957] (II) Module modesetting: vendor="X.Org Foundation"
      [    33.957]     compiled for 1.14.1, module version = 0.8.0
      [    33.957]     Module class: X.Org Video Driver
      [    33.957]     ABI class: X.Org Video Driver, version 14.1
      [    33.957] (II) LoadModule: "fbdev"
      [    33.957] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
      [    33.958] (II) Module fbdev: vendor="X.Org Foundation"
      [    33.958]     compiled for 1.14.1, module version = 0.4.3
      [    33.958]     Module class: X.Org Video Driver
      [    33.958]     ABI class: X.Org Video Driver, version 14.1
      [    33.958] (==) Matched fglrx as autoconfigured driver 0
      [    33.958] (==) Matched ati as autoconfigured driver 1
      [    33.958] (==) Matched fglrx as autoconfigured driver 2
      [    33.958] (==) Matched ati as autoconfigured driver 3
      [    33.958] (==) Matched vesa as autoconfigured driver 4
      [    33.958] (==) Matched modesetting as autoconfigured driver 5
      [    33.958] (==) Matched fbdev as autoconfigured driver 6
      [    33.958] (==) Assigned the driver to the xf86ConfigLayout
      [    33.958] (II) LoadModule: "fglrx"
      [    33.958] (WW) Warning, couldn't open module fglrx
      [    33.958] (II) UnloadModule: "fglrx"
      [    33.958] (II) Unloading fglrx
      [    33.958] (EE) Failed to load module "fglrx" (module does not exist, 0)
      [    33.958] (II) LoadModule: "ati"
      [    33.958] (WW) Warning, couldn't open module ati
      [    33.958] (II) UnloadModule: "ati"
      [    33.958] (II) Unloading ati
      [    33.958] (EE) Failed to load module "ati" (module does not exist, 0)
      [    33.958] (II) LoadModule: "vesa"
      [    33.958] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
      [    33.958] (II) Module vesa: vendor="X.Org Foundation"
      [    33.958]     compiled for 1.14.1, module version = 2.3.2
      [    33.958]     Module class: X.Org Video Driver
      [    33.958]     ABI class: X.Org Video Driver, version 14.1
      [    33.958] (II) UnloadModule: "vesa"
      [    33.958] (II) Unloading vesa
      [    33.958] (II) Failed to load module "vesa" (already loaded, 0)
      [    33.958] (II) LoadModule: "modesetting"
      [    33.959] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
      [    33.959] (II) Module modesetting: vendor="X.Org Foundation"
      [    33.959]     compiled for 1.14.1, module version = 0.8.0
      [    33.959]     Module class: X.Org Video Driver
      [    33.959]     ABI class: X.Org Video Driver, version 14.1
      [    33.959] (II) UnloadModule: "modesetting"
      [    33.959] (II) Unloading modesetting
      [    33.959] (II) Failed to load module "modesetting" (already loaded, 0)
      [    33.959] (II) LoadModule: "fbdev"
      [    33.959] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
      [    33.959] (II) Module fbdev: vendor="X.Org Foundation"
      [    33.959]     compiled for 1.14.1, module version = 0.4.3
      [    33.959]     Module class: X.Org Video Driver
      [    33.959]     ABI class: X.Org Video Driver, version 14.1
      [    33.959] (II) UnloadModule: "fbdev"
      [    33.959] (II) Unloading fbdev
      [    33.959] (II) Failed to load module "fbdev" (already loaded, 0)
      [    33.959] (II) VESA: driver for VESA chipsets: vesa
      [    33.959] (II) modesetting: Driver for Modesetting Kernel Drivers: kms
      [    33.959] (II) FBDEV: driver for framebuffer: fbdev
      [    33.959] (++) using VT number 7

    If I install fglrx, it reads:

      [    37.152] Initializing built-in extension DRI2
      [    37.152] (II) LoadModule: "glx"
      [    37.152] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/modules/extensions/libglx.so
      [    37.152] (II) Module glx: vendor="Advanced Micro Devices, Inc."
      [    37.152]     compiled for 6.9.0, module version = 1.0.0
      [    37.152] Loading extension GLX
      [    37.153] (==) Matched fglrx as autoconfigured driver 0
      [    37.153] (==) Matched ati as autoconfigured driver 1
      [    37.153] (==) Matched vesa as autoconfigured driver 2
      [    37.153] (==) Matched modesetting as autoconfigured driver 3
      [    37.153] (==) Matched fbdev as autoconfigured driver 4
      [    37.153] (==) Assigned the driver to the xf86ConfigLayout
      [    37.153] (II) LoadModule: "fglrx"
      [    37.153] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/modules/drivers/fglrx_drv.so
      [    37.168] (II) Module fglrx: vendor="FireGL - AMD Technologies Inc."
      [    37.168]     compiled for 1.4.99.906, module version = 13.10.10
      [    37.168]     Module class: X.Org Video Driver
      [    37.168] (II) Loading sub module "fglrxdrm"
      [    37.168] (II) LoadModule: "fglrxdrm"
      [    37.168] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/modules/linux/libfglrxdrm.so
      [    37.169] (II) Module fglrxdrm: vendor="FireGL - AMD Technologies Inc."
      [    37.169]     compiled for 1.4.99.906, module version = 13.10.10
      [    37.169] (II) LoadModule: "ati"
      [    37.169] (WW) Warning, couldn't open module ati
      [    37.169] (II) UnloadModule: "ati"
      [    37.169] (II) Unloading ati
      [    37.169] (EE) Failed to load module "ati" (module does not exist, 0)
      [    37.169] (II) LoadModule: "vesa"
      [    37.169] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
      [    37.169] (II) Module vesa: vendor="X.Org Foundation"
      [    37.169]     compiled for 1.14.1, module version = 2.3.2
      [    37.169]     Module class: X.Org Video Driver
      [    37.169]     ABI class: X.Org Video Driver, version 14.1
      [    37.169] (II) LoadModule: "modesetting"
      [    37.170] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
      [    37.170] (II) Module modesetting: vendor="X.Org Foundation"
      [    37.170]     compiled for 1.14.1, module version = 0.8.0
      [    37.170]     Module class: X.Org Video Driver
      [    37.170]     ABI class: X.Org Video Driver, version 14.1
      [    37.170] (II) LoadModule: "fbdev"
      [    37.170] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
      [    37.170] (II) Module fbdev: vendor="X.Org Foundation"
      [    37.170]     compiled for 1.14.1, module version = 0.4.3
      [    37.170]     Module class: X.Org Video Driver
      [    37.170]     ABI class: X.Org Video Driver, version 14.1
      [    37.170] (==) Matched fglrx as autoconfigured driver 0
      [    37.170] (==) Matched ati as autoconfigured driver 1
      [    37.170] (==) Matched vesa as autoconfigured driver 2
      [    37.170] (==) Matched modesetting as autoconfigured driver 3
      [    37.170] (==) Matched fbdev as autoconfigured driver 4
      [    37.170] (==) Assigned the driver to the xf86ConfigLayout
      [    37.170] (II) LoadModule: "fglrx"
      [    37.170] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/modules/drivers/fglrx_drv.so
      [    37.170] (II) Module fglrx: vendor="FireGL - AMD Technologies Inc."
      [    37.170]     compiled for 1.4.99.906, module version = 13.10.10
      [    37.170]     Module class: X.Org Video Driver
      [    37.170] (II) LoadModule: "ati"
      [    37.170] (WW) Warning, couldn't open module ati
      [    37.170] (II) UnloadModule: "ati"
      [    37.171] (II) Unloading ati
      [    37.171] (EE) Failed to load module "ati" (module does not exist, 0)
      [    37.171] (II) LoadModule: "vesa"
      [    37.171] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
      [    37.171] (II) Module vesa: vendor="X.Org Foundation"
      [    37.171]     compiled for 1.14.1, module version = 2.3.2
      [    37.171]     Module class: X.Org Video Driver
      [    37.171]     ABI class: X.Org Video Driver, version 14.1
      [    37.171] (II) UnloadModule: "vesa"
      [    37.171] (II) Unloading vesa
      [    37.171] (II) Failed to load module "vesa" (already loaded, 0)
      [    37.171] (II) LoadModule: "modesetting"
      [    37.171] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
      [    37.171] (II) Module modesetting: vendor="X.Org Foundation"
      [    37.171]     compiled for 1.14.1, module version = 0.8.0
      [    37.171]     Module class: X.Org Video Driver
      [    37.171]     ABI class: X.Org Video Driver, version 14.1
      [    37.171] (II) UnloadModule: "modesetting"
      [    37.171] (II) Unloading modesetting
      [    37.171] (II) Failed to load module "modesetting" (already loaded, 0)
      [    37.171] (II) LoadModule: "fbdev"
      [    37.171] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
      [    37.171] (II) Module fbdev: vendor="X.Org Foundation"
      [    37.171]     compiled for 1.14.1, module version = 0.4.3
      [    37.171]     Module class: X.Org Video Driver
      [    37.171]     ABI class: X.Org Video Driver, version 14.1
      [    37.171] (II) UnloadModule: "fbdev"
      [    37.171] (II) Unloading fbdev
      [    37.171] (II) Failed to load module "fbdev" (already loaded, 0)
      [    37.171] (II) AMD Proprietary Linux Driver Version Identifier:13.10.10
      [    37.171] (II) AMD Proprietary Linux Driver Release Identifier: UNSUPPORTED-13.101
      [    37.171] (II) AMD Proprietary Linux Driver Build Date: May 23 2013 15:49:35
      [    37.171] (II) VESA: driver for VESA chipsets: vesa
      [    37.171] (II) modesetting: Driver for Modesetting Kernel Drivers: kms
      [    37.171] (II) FBDEV: driver for framebuffer: fbdev
      [    37.171] (++) using VT number 7

    I did more installing/removing of packages than that. At one point it said:

      (EE) Failed to load /usr/lib64/xorg/modules/libglamoregl.so: /usr/lib64/xorg/modules/libglamoregl.so: undefined symbol: _glapi_tls_Context

    There is also an "init: not found" error in ~/.xsession-errors:

      /usr/sbin/lightdm-session: 5: exec: init: not found

    Actually, I'm out of ideas. What about you? :)

    Read the article

  • codeIgniter: pass parameter to a select query from previous query

    - by krike
    I'm creating a little management tool for the browser game Travian. I select all the villages from the database, and I want to display some content that's unique to each of the villages. But in order to query for those unique details I need to pass the id of the village. How should I do this? This is my code (controller):

      function members_area()
      {
          global $site_title;
          $this->load->model('membership_model');

          if($this->membership_model->get_villages())
          {
              $data['rows'] = $this->membership_model->get_villages();
              $id = 1;//this should be dynamic, but how?
              if($this->membership_model->get_tasks($id)):
                  $data['tasks'] = $this->membership_model->get_tasks($id);
              endif;
          }

          $data['title'] = $site_title." | Your account";
          $data['main_content'] = 'account';
          $this->load->view('template', $data);
      }

    and these are the two functions I'm using in the model:

      function get_villages()
      {
          $q = $this->db->get('villages');
          if($q->num_rows() > 0)
          {
              foreach ($q->result() as $row)
              {
                  $data[] = $row;
              }
              return $data;
          }
      }

      function get_tasks($id)
      {
          $this->db->select('name');
          $this->db->from('tasks');
          $this->db->where('villageid', $id);
          $q = $this->db->get();
          if($q->num_rows() > 0)
          {
              foreach ($q->result() as $task)
              {
                  $data[] = $task;
              }
              return $data;
          }
      }

    and of course the view:

      <?php foreach($rows as $r) : ?>
      <div class="village">
          <h3><?php echo $r->name; ?></h3>
          <ul>
          <?php foreach($tasks as $task): ?>
              <li><?php echo $task->name; ?></li>
          <?php endforeach; ?>
          </ul>
          <?php echo anchor('site/add_village/'.$r->id.'', '+ add new task'); ?>
      </div>
      <?php endforeach; ?>

    ps: please do not remove the comment in the first block of code!
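
    A hedged sketch of one way to make $id dynamic, reusing the question's own model methods: fetch the villages once, then collect the tasks per village in an array keyed by village id, so the view can look up the right task list for each row:

      // Controller sketch (illustrative; based on the code above)
      $rows = $this->membership_model->get_villages();
      if ($rows)
      {
          $data['rows'] = $rows;
          $data['tasks'] = array();
          foreach ($rows as $village)
          {
              // key each task list by the village's own id instead of a fixed $id
              $data['tasks'][$village->id] = $this->membership_model->get_tasks($village->id);
          }
      }

      // View sketch: read the tasks for the current village
      // <?php foreach ((array) $tasks[$r->id] as $task): ?>
      //     <li><?php echo $task->name; ?></li>
      // <?php endforeach; ?>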

    Read the article

  • Rails app deployment challenge, not finding database table in production.log

    - by Stefan M
    I'm trying to set up PasswordPusher as my first Ruby app ever. Building and running the WEBrick server as instructed in the README works fine. It was only when I tried to add Apache ProxyPass and ProxyPassReverse that the page load slowed down to several minutes. So I gave mod_passenger a whirl, but now it's unable to find the passwords table. Here's what I get in log/production.log:

      Started GET "/" for 10.10.2.13 at Sun Jun 10 08:07:19 +0200 2012
      Processing by PasswordsController#new as HTML
      Completed 500 Internal Server Error in 1ms

      ActiveRecord::StatementInvalid (Could not find table 'passwords'):
        app/controllers/passwords_controller.rb:77:in `new'
        app/controllers/passwords_controller.rb:77:in `new'

    While in log/private.log I get a lot more output, so here's just a snippet, but it looks to me like it's working with the database. Edit: This was actually old log output, maybe from db:create.

      Migrating to AddUserToPassword (20120220172426)
      (0.3ms) ALTER TABLE "passwords" ADD "user_id" integer
      (0.0ms) PRAGMA index_list("passwords")
      (0.2ms) CREATE INDEX "index_passwords_on_user_id" ON "passwords" ("user_id")
      (0.7ms) INSERT INTO "schema_migrations" ("version") VALUES ('20120220172426')
      (0.1ms) select sqlite_version(*)
      (0.1ms) SELECT "schema_migrations"."version" FROM "schema_migrations"
      (0.0ms) PRAGMA index_list("passwords")
      (0.0ms) PRAGMA index_info('index_passwords_on_user_id')
      (4.6ms) PRAGMA index_list("rails_admin_histories")
      (0.0ms) PRAGMA index_info('index_rails_admin_histories')
      (0.0ms) PRAGMA index_list("users")
      (4.8ms) PRAGMA index_info('index_users_on_unlock_token')
      (0.0ms) PRAGMA index_info('index_users_on_reset_password_token')
      (0.0ms) PRAGMA index_info('index_users_on_email')
      (0.0ms) PRAGMA index_list("views")

    In my vhost I have it set to use RailsEnv private:

      <VirtualHost *:80>
      #    ProxyPreserveHost on
      #
      #    ProxyPass / http://10.220.100.209:180/
      #    ProxyPassReverse / http://10.220.100.209:180/
          DocumentRoot /var/www/pwpusher/public
          <Directory /var/www/pwpusher/public>
              allow from all
              Options -MultiViews
          </Directory>
          RailsEnv private
          ServerName pwpush.intranet
          ErrorLog /var/log/apache2/error.log
          LogLevel debug
          CustomLog /var/log/apache2/access.log combined
      </VirtualHost>

    My passenger.conf in mods-enabled is the Debian default:

      <IfModule mod_passenger.c>
          PassengerRoot /usr
          PassengerRuby /usr/bin/ruby
      </IfModule>

    In the Apache error.log I get something more cryptic to me:

      [Sun Jun 10 06:25:07 2012] [notice] Apache/2.2.16 (Debian) Phusion_Passenger/2.2.11 PHP/5.3.3-7+squeeze9 with Suhosin-Patch mod_ssl/2.2.16 OpenSSL/0.9.8o configured -- resuming normal operations
      /var/www/pwpusher/vendor/bundle/ruby/1.8/bundler/gems/modernizr-rails-09e9e6a92d67/lib/modernizr/rails/version.rb:3: warning: already initialized constant VERSION
      cache: [GET /] miss
      [Sun Jun 10 08:07:19 2012] [debug] mod_deflate.c(615): [client 10.10.2.13] Zlib: Compressed 728 to 423 : URL /
      /var/www/pwpusher/vendor/bundle/ruby/1.8/bundler/gems/modernizr-rails-09e9e6a92d67/lib/modernizr/rails/version.rb:3: warning: already initialized constant VERSION
      cache: [GET /] miss
      [Sun Jun 10 10:17:16 2012] [debug] mod_deflate.c(615): [client 10.10.2.13] Zlib: Compressed 728 to 423 : URL /

    Maybe that's routine stuff. I can see the rake command created files in the app root's db/ directory: I have private.sqlite3 and production.sqlite3, among others. And here's my config/database.yml:

      base: &base
        adapter: sqlite3
        timeout: 5000

      development:
        database: db/development.sqlite3
        <<: *base

      test:
        database: db/test.sqlite3
        <<: *base

      private:
        database: db/private.sqlite3
        <<: *base

      production:
        database: db/production.sqlite3
        <<: *base

    I've tried setting absolute paths in it, but that did not help.
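
    One thing worth ruling out (an assumption on my part, not something shown in the question): whether the migrations were ever run for the environment Passenger actually uses. The 500 shows up in log/production.log even though the vhost sets RailsEnv private, so preparing both environments is a cheap test:

      # Illustrative commands, assuming the app ships the standard rake tasks:
      cd /var/www/pwpusher
      RAILS_ENV=private bundle exec rake db:migrate
      RAILS_ENV=production bundle exec rake db:migrate   # in case production is what's really running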

    Read the article

  • MySQL query being performed when PHP if condition not met?

    - by Ryan
    The script I'm using is:

      if($profile['username'] == $user['username']) {
          $db->query("UPDATE users SET newcomments = 0 WHERE username = '$user[username]'");
          echo "This is a test";
      }

    (Note that $db->query is exactly the same as mysql_query.) For some very odd reason, the MySQL query is performed even if the defined condition is false. The "This is a test" works properly and only appears when the condition is met, but the MySQL query is performed anyway. What's the problem with it?
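
    A hedged debugging sketch, not a diagnosis: log the exact values being compared and the exact SQL sent, to confirm the UPDATE really originates from this branch rather than from a second request or another code path:

      // Illustrative instrumentation only (mysql_* matches the question's era).
      error_log('profile=' . var_export($profile['username'], true)
              . ' user=' . var_export($user['username'], true));
      if ($profile['username'] === $user['username']) {   // strict comparison
          $sql = "UPDATE users SET newcomments = 0 WHERE username = '"
               . mysql_real_escape_string($user['username']) . "'";
          error_log('running: ' . $sql);
          $db->query($sql);
          echo "This is a test";
      }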

    Read the article

  • SQL SERVER – Introduction to Extended Events – Finding Long Running Queries

    - by pinaldave
    The job of an SQL consultant is always interesting. The month before, I was busy doing query optimization and performance tuning projects for our clients, and this month I am busy delivering my Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning course. I recently read a white paper about Extended Events by SQL Server MVP Jonathan Kehayias. You can read the white paper here: Using SQL Server 2008 Extended Events. I also read another appealing chapter by Jonathan in the book Professional SQL Server 2008 Internals and Troubleshooting (see my SQLAuthority book review). After reading these excellent notes by Jonathan, I decided to upgrade my course and include Extended Events as one of the modules. This week, I delivered the Extended Events session twice, and the attendees really liked it. They think Extended Events is one of the most powerful tools available.

    Extended Events can do many things. I suggest that you read the white paper I mentioned to learn more about this tool. Instead of writing out long theory, I am going to present a very quick script for Extended Events. This event session captures all the long-running queries from the moment the event session is started. One of the many advantages of Extended Events is that it can be configured very easily, and it is a robust method for collecting the information needed for troubleshooting. There are many targets where you can store the information, including the XML file target, which I really like. In the following event session, we write the details of the event to two locations: 1) the Ring Buffer; and 2) an XML file. It is not necessary to write to both places; either of the two will do.

      -- Extended Event for finding *long running query*
      IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='LongRunningQuery')
          DROP EVENT SESSION LongRunningQuery ON SERVER
      GO
      -- Create Event
      CREATE EVENT SESSION LongRunningQuery
      ON SERVER
      -- Add event to capture event
      ADD EVENT sqlserver.sql_statement_completed
      (
          -- Add action - event property
          ACTION (sqlserver.sql_text, sqlserver.tsql_stack)
          -- Predicate - time 1000 millisecond
          WHERE sqlserver.sql_statement_completed.duration > 1000
      )
      -- Add target for capturing the data - XML File
      ADD TARGET package0.asynchronous_file_target(
          SET filename='c:\LongRunningQuery.xet', metadatafile='c:\LongRunningQuery.xem'),
      -- Add target for capturing the data - Ring Buffer
      ADD TARGET package0.ring_buffer
          (SET max_memory = 4096)
      WITH (max_dispatch_latency = 1 seconds)
      GO
      -- Enable Event
      ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=START
      GO
      -- Run long query (longer than 1000 ms)
      SELECT *
      FROM AdventureWorks.Sales.SalesOrderDetail
      ORDER BY UnitPriceDiscount DESC
      GO
      -- Stop the event
      ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=STOP
      GO
      -- Read the data from Ring Buffer
      SELECT CAST(dt.target_data AS XML) AS xmlLockData
      FROM sys.dm_xe_session_targets dt
      JOIN sys.dm_xe_sessions ds ON ds.Address = dt.event_session_address
      JOIN sys.server_event_sessions ss ON ds.Name = ss.Name
      WHERE dt.target_name = 'ring_buffer'
      AND ds.Name = 'LongRunningQuery'
      GO
      -- Read the data from XML File
      SELECT event_data_XML.value('(event/data[1])[1]','VARCHAR(100)') AS Database_ID,
             event_data_XML.value('(event/data[2])[1]','INT') AS OBJECT_ID,
             event_data_XML.value('(event/data[3])[1]','INT') AS object_type,
             event_data_XML.value('(event/data[4])[1]','INT') AS cpu,
             event_data_XML.value('(event/data[5])[1]','INT') AS duration,
             event_data_XML.value('(event/data[6])[1]','INT') AS reads,
             event_data_XML.value('(event/data[7])[1]','INT') AS writes,
             event_data_XML.value('(event/action[1])[1]','VARCHAR(512)') AS sql_text,
             event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS tsql_stack,
             CAST(event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS XML)
                 .value('(frame/@handle)[1]','VARCHAR(50)') AS handle
      FROM (
          SELECT CAST(event_data AS XML) event_data_XML, *
          FROM sys.fn_xe_file_target_read_file
               ('c:\LongRunningQuery*.xet', 'c:\LongRunningQuery*.xem', NULL, NULL)) T
      GO
      -- Clean up. Drop the event
      DROP EVENT SESSION LongRunningQuery ON SERVER
      GO

    Just run the above script; afterwards you will find the following result set. This result set contains the queries that ran longer than 1000 ms. In our example I used the XML file target, which does not reset when SQL services or the computer restarts (if you are using the DMV, it resets when SQL services restart). This event session can be very helpful for troubleshooting. Let me know if you want me to write more about Extended Events. I am totally fascinated with this feature, so I'm planning to acquire more knowledge about it so I can determine its other usages.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology Tagged: SQL Extended Events

    Read the article

  • Design by Contract with Microsoft .Net Code Contract

    - by Fredrik N
    I have done some talks at different events and summits about Defensive Programming and Design by Contract; the last one was at Cornerstone's Developer Summit 2010, and the next will be at SweNug (Sweden .Net User Group). I decided to write a blog post about some of the things I've been talking about.

    Users are a terrible thing! Protect yourself from them

    "Human users have a gift for doing the worst possible thing at the worst possible time." – Michael T. Nygard, Release It!

    The kind of users Michael T. Nygard is talking about are the users of a system. But we also have users of our code, and those are the users I'm going to focus on: me, you, and other developers.

    "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." – Martin Fowler

    Good programmers also write code that humans know how to use, and good programmers make sure software behaves in a predictable manner despite inputs or user actions.

    Design by Contract

    Design by Contract (DbC) is a way for us to make a contract between us (the code writers) and the users of our code. It's about "if you give me this, I promise to give you that". It's not about business validation; that is something completely different and should be part of the domain model. DbC is there to make sure the users of our code use it in a correct way, and so that we can rely on the contract and write our code knowing that the users will follow it. Having a contract specified makes the code much easier to write. Something like the following is code we may see often:

      public void DoSomething(Object value)
      {
          value.DoIKnowThatICanDoThis();
      }

    Here "value" can be used directly or passed on to other methods and used later. What some of us easily forget is that "value" can be null. We would probably not pass in a null value ourselves, but someone else using our code might. I bet most of you (including me) have passed null into a method because you didn't know whether the argument had to be set to a valid value, and I bet most of you have also got the "null reference exception". Sometimes a null reference exception can be hard and take time to fix, because we need to search through our code to find where the null value was passed in. Wouldn't it be much better if we could specify, as early as possible, that the value can't be null, so the users of our code know it when they start to use our code, before run-time execution? This is where DbC comes into the picture. We can use DbC to specify what we need, and by doing so we can rely on the contract when we write our code. So the code above can actually call the DoIKnowThatICanDoThis() method on the value object without worrying that "value" might be null: the contract between the users of the code and us writing the code says that "value" can't be null.

    Pre- and Postconditions

    When working with DbC we specify pre- and postconditions. A precondition is a condition that should be met before a query or command is executed. An example of a precondition is: "the value argument of the method can't be null", and we make sure "value" isn't null before the method is called. A postcondition is a condition that should be met when a command or query has completed; a postcondition makes sure the result is correct. An example of a postcondition is: "the method will return a list with at least 1 item".

    Commands and Queries

    When using DbC, we need to know what a command and a query is, because some of the principles that are good to follow are based on commands and queries.

    A command is something that will not return anything, like SQL's CREATE, UPDATE and DELETE. There are two kinds of commands in DbC: creation commands (for example a constructor), and other commands. Other commands can, for example, add a value to a list, or remove or update a value:

      //Creation commands
      public Stack(int size)

      //Other commands
      public void Push(object value);
      public void Remove();

    A query is something that will return something, for example an attribute, a property or a function, like SQL's SELECT. There are two kinds of queries: basic queries (queries that aren't based on other queries) and derived queries (queries that are based on other queries). Here is an example of the queries of a Stack:

      //Basic Queries
      public int Count;
      public object this[int index] { get; }

      //Derived Queries
      //Is related to Count Query
      public bool IsEmpty()
      {
          return Count == 0;
      }

    To understand the principles that are good to follow when using DbC, we need to know about commands and the different kinds of queries.

    The 6 Principles

    When working with DbC, it's advisable to follow some principles to make it easier to define and use contracts. The DbC principles are:

    1. Separate commands and queries.
    2. Separate basic queries from derived queries.
    3. For each derived query, write a postcondition that specifies what result will be returned, in terms of one or more basic queries.
    4. For each command, write a postcondition that specifies the value of every basic query.
    5. For every query and command, decide on a suitable precondition.
    6. Write invariants to define unchanging properties of objects.

    Before I write about each of them, I want you to know that I'm going to use .Net 4.0 Code Contract. In the rest of the post I will use a simple Stack (yes I know, .Net already has a Stack class) to give you a basic understanding of using DbC. A Stack is a data structure where the last item in is the first item out. Here is a basic implementation of a Stack with no contract specified yet:

      public class Stack
      {
          private object[] _array;

          //Basic Queries
          public uint Count;
          public object this[uint index]
          {
              get { return _array[index]; }
              set { _array[index] = value; }
          }

          //Derived Queries
          //Is related to Count Query
          public bool IsEmpty()
          {
              return Count == 0;
          }

          //Is related to Count and this[] Query
          public object Top()
          {
              return this[Count];
          }

          //Creation commands
          public Stack(uint size)
          {
              Count = 0;
              _array = new object[size];
          }

          //Other commands
          public void Push(object value)
          {
              this[++Count] = value;
          }

          public void Remove()
          {
              this[Count] = null;
              Count--;
          }
      }

    Note: the Stack is implemented this way to demonstrate the use of Code Contract in a simple manner. It may not look like how you would implement it, so don't take it as the perfect Stack implementation; it is only used for demonstration.

    Before I go deeper into the principles, I will briefly mention how we use .Net Code Contract. I mentioned before that pre- and postconditions are about requiring something and ensuring something. When using Code Contract, we use a static class called Contract, located in the System.Diagnostics.Contracts namespace. The contract must be specified at the top of our member's statement block. To specify a precondition with Code Contract we use the Contract.Requires method, and to specify a postcondition we use the Contract.Ensures method. Here is an example where both a pre- and a postcondition are used:

      public object Top()
      {
          Contract.Requires(Count > 0, "Stack is empty");
          Contract.Ensures(Contract.Result<object>() == this[Count]);
          return this[Count];
      }

    The contract above requires that Count is greater than 0; if it isn't, we can't get the item at the top of the Stack. We also ensure that the result of the Top query (through the Contract.Result method we can write a postcondition over the value a method returns) is equal to this[Count].

    1. Separate commands and queries

    When working with DbC, it's important to separate commands and queries. A method should either be a command that performs an action or return information to the caller, not both. Asking a question shouldn't change the answer. The following is an example of a command and a query of a Stack:

      public void Push(object value)
      public object Top()

    Push is a command and will not return anything; it just adds a value to the Stack. Top is a query that gets the item at the top of the Stack.

    2. Separate basic queries from derived queries

    There are two different kinds of queries: basic queries, which don't rely on other queries, and derived queries, which use a basic query. The point of this principle is that derived queries can be specified in terms of basic queries, so it is about recognizing whether a query is derived or basic. Doing so makes it much easier to follow the remaining principles. The following code shows a basic query and a derived query:

      //Basic Queries
      public uint Count;

      //Derived Queries
      //Is related to Count Query
      public bool IsEmpty()
      {
          return Count == 0;
      }

    We can see that IsEmpty uses the Count query, which makes IsEmpty a derived query.

    3. For each derived query, write a postcondition that specifies what result will be returned, in terms of one or more basic queries

    Once the derived queries are recognized, we can follow the 3rd principle: for each derived query, we write a postcondition that specifies what result will be returned, in terms of one or more basic queries. Remember that DbC is a contract between the users of the code and us writing it: we can't just demand that the users pass in valid values, we must also ensure that we give the users what they want when they follow our contract. The IsEmpty query of the Stack uses the Count query, so we should write a postcondition that specifies what result will be returned in terms of that basic query:

      //Basic Queries
      public uint Count;

      //Derived Queries
      public bool IsEmpty()
      {
          Contract.Ensures(Contract.Result<bool>() == (Count == 0));
          return Count == 0;
      }

    Contract.Ensures is used to create the postcondition. The code above makes sure that the result of IsEmpty is correct, that is, IsEmpty will be either true or false based on whether Count is equal to 0. The postcondition uses a basic query, so IsEmpty now follows the 3rd principle. We also have another derived query, the Top query; it needs a postcondition as well, and it uses all the basic queries. The result of the Top method must be the same value that the this[] query returns:

      //Basic Queries
      public uint Count;
      public object this[uint index]
      {
          get { return _array[index]; }
          set { _array[index] = value; }
      }

      //Derived Queries
      //Is related to Count and this[] Query
      public object Top()
      {
          Contract.Ensures(Contract.Result<object>() == this[Count]);
          return this[Count];
      }

    4. For each command, write a postcondition that specifies the value of every basic query

    For each command we create a postcondition that specifies the values of the basic queries. If we look at the Stack implementation, we have three commands: one creation command, the constructor, and two other commands, Push and Remove. These commands need postconditions, and to follow the 4th principle the postconditions should be expressed with basic queries:

      //Creation commands
      public Stack(uint size)
      {
          Contract.Ensures(Count == 0);
          Count = 0;
          _array = new object[size];
      }

      //Other commands
      public void Push(object value)
      {
          Contract.Ensures(Count == Contract.OldValue<uint>(Count) + 1);
          Contract.Ensures(this[Count] == value);
          this[++Count] = value;
      }

      public void Remove()
      {
          Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1);
          this[Count] = null;
          Count--;
      }

    As you can see, the creation command ensures that Count is 0 when the Stack is created; a newly created Stack shouldn't contain any items. The Push command takes a value and puts it onto the Stack; when an item is pushed, Count must increase to reflect the number of items added, and we must also make sure the item was really added. The postcondition of the Push method ensures that the old value of Count (with Contract.OldValue we can get the value a query had before the method was called) plus 1 is equal to Count; this is how we ensure that Push increases Count by one. We also make sure the this[] query now contains the item we pushed onto the Stack. The Remove method must make sure Count is decreased by one when the top item is removed from the Stack. The commands now follow the 4th principle, where each command has a postcondition that uses the values of basic queries.

    Note: the principle says every basic query. Remove only uses one query, Count, because this command can't use the this[] query (an item has been removed), so the only way to make sure an item was removed is to use the Count query; Remove still follows the principle.

    5. For every query and command, decide on a suitable precondition

    We have so far focused only on postconditions; now it's time for some preconditions. The 5th principle is about deciding on a suitable precondition for every query and command. If we look at one of our basic queries (I will not go through all queries and commands here, just some of them), the this[] query, we can't pass an index that is lower than 1 (.Net arrays and lists are zero-based, but not the Stack in this blog post ;)), and the index can't be greater than the number of items in the Stack. So here we need a precondition:

      public object this[uint index]
      {
          get
          {
              Contract.Requires(index >= 1);
              Contract.Requires(index <= Count);
              return _array[index];
          }
      }

    Think about the contract as documentation on how to use the code in a correct way. If the contract could be specified elsewhere (not as part of the method body), we could simply write "return _array[index]", and there would be no need to check whether index is greater or lesser than Count, because that is specified in a "contract". The implementation of Code Contract requires that the contract is specified in the code. As a developer, I would rather have this contract elsewhere (like Spec#), or implemented the way Eiffel does it, as part of the language.

    Now that we have looked at one query, we can also look at one command, the Remove command (you can see the whole implementation of the Stack at the end of this blog post, where preconditions are added to more queries and commands than I show in this section). We can only remove an item if Count is greater than 0, so we write a precondition that requires Count to be greater than 0:

      public void Remove()
      {
          Contract.Requires(Count > 0);
          Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1);
          this[Count] = null;
          Count--;
      }

    6. Write invariants to define unchanging properties of objects

    The last principle is about making sure the object is feeling great! This is done by using invariants. With Code Contract we can specify invariants by adding a method marked with the ContractInvariantMethod attribute; the method may only contain calls to Contract.Invariant. To make sure the Stack feels great, it must hold 0 or more items; Count can never be negative, so that every command and query of the Stack can be used. Here is our invariant for the Stack object:

      [ContractInvariantMethod]
      private void ObjectInvariant()
      {
          Contract.Invariant(Count >= 0);
      }

    Note: the ObjectInvariant method is called each time after a query or command has run. Here is the full example using Code Contract:

      public class Stack
      {
          private object[] _array;

          //Basic Queries
          public uint Count;
          public object this[uint index]
          {
              get
              {
                  Contract.Requires(index >= 1);
                  Contract.Requires(index <= Count);
                  return _array[index];
              }
              set
              {
                  Contract.Requires(index >= 1);
                  Contract.Requires(index <= Count);
                  _array[index] = value;
              }
          }

          //Derived Queries
          //Is related to Count Query
          public bool IsEmpty()
          {
              Contract.Ensures(Contract.Result<bool>() == (Count == 0));
              return Count == 0;
          }

          //Is related to Count and this[] Query
          public object Top()
          {
              Contract.Requires(Count > 0, "Stack is empty");
              Contract.Ensures(Contract.Result<object>() == this[Count]);
              return this[Count];
          }

          //Creation commands
          public Stack(uint size)
          {
              Contract.Requires(size > 0);
              Contract.Ensures(Count == 0);
              Count = 0;
              _array = new object[size];
          }

          //Other commands
          public void Push(object value)
          {
              Contract.Requires(value != null);
              Contract.Ensures(Count == Contract.OldValue<uint>(Count) + 1);
              Contract.Ensures(this[Count] == value);
              this[++Count] = value;
          }

          public void Remove()
          {
              Contract.Requires(Count > 0);
              Contract.Ensures(Count == Contract.OldValue<uint>(Count) - 1);
              this[Count] = null;
              Count--;
          }

          [ContractInvariantMethod]
          private void ObjectInvariant()
          {
              Contract.Invariant(Count >= 0);
          }
      }

    Summary

    By using Design by Contract we can make sure the users use our code in a correct way, and we must also make sure the users get the expected results when they do. This is done by specifying contracts. To make Design by Contract easy to use, some principles are good to follow, like the separation of commands and queries. With .Net 4.0 we can use the Code Contract feature to specify contracts.
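
    To round this off, here is a hedged usage sketch of my own (not from the original post) showing what a caller sees; with runtime checking enabled by the Code Contracts rewriter, the last call fails its precondition before the method body runs:

      Stack stack = new Stack(10);
      stack.Push("first");        // postcondition: Count == old(Count) + 1
      object top = stack.Top();   // fine: Count > 0, returns "first"
      stack.Remove();             // postcondition: Count == old(Count) - 1
      stack.Remove();             // violates Contract.Requires(Count > 0);
                                  // raises a contract failure at run time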

    Read the article

  • Transitioning from Oracle based CMS to MySQL based CMS

    - by KM01
    We're looking at a replacement for our CMS, which runs on Oracle. The new CMSes that we've looked at can in theory run on Oracle, but:
    - most of the vendor's installs run off of MySQL
    - the vendor supports installs of their CMS on MySQL, and only a "theoretical" install on Oracle
    - the vendor's dev shops use MySQL
    - none of them develop/test against Oracle
    Our DBA team works exclusively with Oracle, and doesn't have the bandwidth to provide additional support for a highly available and performant MySQL setup. They could in theory go to training and get ramped up, but our timeline is also short (surprise!). So ... I guess my question(s) are: If you've seen a situation like this, how have you dealt with it? What tipped the balance either way? What type of effort did it take? If you were to do it over, what would you do differently? Thanks! KM

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2007 CASE Statement in ORDER BY Clause – ORDER BY using Variable This article is as per a request from the Application Development Team Leader of my company. His team encountered code where the application was preparing a string for the ORDER BY clause of the SELECT statement. The application was passing this string as a variable to a Stored Procedure (SP), and the SP was using EXEC to execute the SQL string. This is not good for performance, as the Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but still does not give the best performance. SSMS – View/Send Query Results to Text/Grid/Files Results to Text – CTRL + T Results to Grid – CTRL + D Results to File – CTRL + SHIFT + F 2008 Introduction to SPARSE Columns Part 2 I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail. I suggest you read the first part before continuing with this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns. Deferred Name Resolution How come an SP can be created successfully when the table name is incorrect, but cannot be created when an incorrect column is used? 2009 Backup Timeline and Understanding of Database Restore Process in Full Recovery Model In general, database backups in the full recovery model are taken as three different kinds of backup files: Full Database Backup, Differential Database Backup, and Log Backup. Restore Sequence and Understanding NORECOVERY and RECOVERY While doing a RESTORE operation, if you are restoring database files, always use the NORECOVERY option, as that will keep the database in a state where more backup files can be restored. This also keeps the database offline to prevent any changes, which could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to get the database online and operational (see the short restore sketch at the end of this entry). Four Different Ways to Find Recovery Model for Database Perhaps the best thing about the technical domain is that most things can be done in more than one way. It is always useful to know about the various methods of performing a single task. Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database When Information Schema is used, we will not be able to discern between primary key and foreign key; we will have both keys together. In the case of the sys schema, we can query the data in our preferred way and can join this table to other tables to retrieve additional data. Get Last Running Query Based on SPID SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID. 2010 SELECT * FROM dual – Dual Equivalent Dual is a table created by Oracle together with the data dictionary. It consists of exactly one column named “dummy” and one record, whose value is X. You can check the content of the DUAL table using the following syntax: SELECT * FROM dual Identifying Statistics Used by Query Someone asked this question in my training class on query optimization and performance tuning.
    “Can I know which statistics were used by my query?” 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31 What are the basic functions for master, msdb, model, tempdb and resource databases? What is the Maximum Number of Index per Table? Explain Few of the New Features of SQL Server 2008 Management Studio Explain IntelliSense for Query Editing Explain MultiServer Query Explain Query Editor Regions Explain Object Explorer Enhancements Explain Activity Monitors SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31 What is Service Broker? Where are SQL server Usernames and Passwords Stored in the SQL server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does TOP Operator Do? What is CTE? What is MERGE Statement? What is Filtered Index? Which are the New Data Types Introduced in SQL SERVER 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31 What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is Use of EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between Update Lock and Exclusive Lock? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31 How will you Handle Errors in SQL SERVER 2008? What is RAISERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is Use of DBCC Commands? How to Copy the Tables, Schema and Views from one SQL Server to Another? How to Find Tables without Indexes? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31 How to Copy Data from One Table to Another Table? What is Catalog Views? What is PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31 How can I Track the Changes or Identify the Latest Insert-Update-Delete from a Table? What is the CPU Pressure? How can I Get Data from a Database on Another Server? What is the Bookmark Lookup and RID Lookup? What is Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check whether Automatic Statistic Update is Enabled or not? How to Find Index Size for Each Index on Table? What is the Difference between Seek Predicate and Predicate? What are Basics of Policy Management? What are the Advantages of Policy Management? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31 What are Policy Management Terms? What is the ‘FILLFACTOR’? Where in MS SQL Server is ‘100’ equal to ‘0’? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are Various Limitations of the Views? What is a Covered index? When I Delete any Data from a Table, does the SQL Server reduce the size of that table? What are Wait Types? How to Stop Log File Growing too Big? If any Stored Procedure is Encrypted, then can we see its definition in Activity Monitor?
2012 Example of Width Sensitive and Width Insensitive Collation Width Sensitive Collation: A single-byte character (half-width) represented as single-byte and the same character represented as a double-byte character (full-width) are when compared are not equal the collation is width sensitive. In this example we have one table with two columns. One column has a collation of width sensitive and the second column has a collation of width insensitive. Find Column Used in Stored Procedure – Search Stored Procedure for Column Name Very interesting conversation about how to find column used in a stored procedure. There are two different characters in the story and both are having a conversation about how to find column in the stored procedure. Here are two part story Part 1 | Part 2 SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video In simple words, in many cases the database move from one place to another place. It is not always possible to back up and restore databases. There are possibilities when only part of the database (with schema and data) has to be moved. In this video we learn that we can easily generate script for schema for data and move from one server to another one. INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1 I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of INFORMATION_SCHEMA.COLUMNS table. I understand that the length of any column can be between 0 to large number but I do not get it when I see value in negative (i.e. -1). Any insight on this subject? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
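    As a quick illustration of the NORECOVERY/RECOVERY restore sequence mentioned in the 2009 notes above, here is a minimal T-SQL sketch (the database name and backup file paths are hypothetical):

    -- Restore the full and differential backups, keeping the database restoring
    RESTORE DATABASE MyDB FROM DISK = 'C:\Backups\MyDB_Full.bak' WITH NORECOVERY;
    RESTORE DATABASE MyDB FROM DISK = 'C:\Backups\MyDB_Diff.bak' WITH NORECOVERY;
    -- Restore each log backup in sequence, still with NORECOVERY
    RESTORE LOG MyDB FROM DISK = 'C:\Backups\MyDB_Log.trn' WITH NORECOVERY;
    -- Nothing left to restore: bring the database online and operational
    RESTORE DATABASE MyDB WITH RECOVERY;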

    Read the article

  • SQL SERVER – Introduction to PERCENTILE_DISC() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytical function PERCENTILE_DISC(). Books Online gives the following definition of this function: Computes a specific percentile for sorted values in an entire rowset or within distinct partitions of a rowset in Microsoft SQL Server 2012 Release Candidate 0 (RC 0). For a given percentile value P, PERCENTILE_DISC sorts the values of the expression in the ORDER BY clause and returns the value with the smallest CUME_DIST value (with respect to the same sort specification) that is greater than or equal to P. If you are clear on the function, there is no need to read further. If you got lost, here is the same in simple words: find the value of the column whose CUME_DIST is equal to or greater than the given percentile. Before you continue reading this blog I strongly suggest you read about the CUME_DIST function over here: Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012. Now let's have fun with the following query:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
           CUME_DIST() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist,
           PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY ProductID)
               OVER (PARTITION BY SalesOrderID) AS PercentileDisc
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID DESC
    GO

    The above query will give us the following result: You can see that I have used PERCENTILE_DISC(0.5) in the query, which is similar to finding the median, but not exactly. The PERCENTILE_DISC() function takes a percentile as a parameter. It returns as the answer the value whose CUME_DIST is equal to or greater than the percentile value passed in. For example, in the above example we are passing 0.5 into the PERCENTILE_DISC() function. It will go through the resultset and identify which rows have CUME_DIST values equal to or greater than 0.5. In the first example it found two rows equal to 0.5, and the ProductID of that row is the answer of PERCENTILE_DISC(). In the third windowed resultset there is only a single row, with a CUME_DIST() value of 1, which is certainly higher than 0.5, making it the answer. To make sure we are properly clear on this, here is one more example where I am passing 0.6 as the percentile. Now let's have fun with the following query:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
           CUME_DIST() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist,
           PERCENTILE_DISC(0.6) WITHIN GROUP (ORDER BY ProductID)
               OVER (PARTITION BY SalesOrderID) AS PercentileDisc
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID DESC
    GO

    The above query will give us the following result: The result of PERCENTILE_DISC(0.6) is the ProductID whose CUME_DIST() is equal to or greater than 0.6. This means that for SalesOrderID 43670, the row with CUME_DIST() 0.75 is the qualifying row, resulting in the answer 773 for ProductID. I hope this explanation makes it further clear. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
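    To see the CUME_DIST / PERCENTILE_DISC relationship without AdventureWorks, here is a minimal self-contained T-SQL sketch (the table variable and its five values are made up purely for illustration):

    -- Hypothetical five-row sample, just to show the mechanics
    DECLARE @t TABLE (val INT);
    INSERT INTO @t (val) VALUES (10), (20), (30), (40), (50);

    SELECT val,
           CUME_DIST() OVER (ORDER BY val) AS CDist,          -- 0.2, 0.4, 0.6, 0.8, 1.0
           PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY val)
               OVER () AS P50                                 -- 30: first value with CDist >= 0.5
    FROM @t;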

    Read the article

  • On MySQL 5.1 for Windows, why can't I assign DBA role to the "root" user?

    - by djangofan
    On MySQL 5.1 for Windows, why can't I assign the DBA role to the "root" user? MySQL Workbench allows me to add all the other roles except DBA. Also, when I "alter schema" on any table while logged in as root, I don't see all the tabs that show me the database properties; I only see the first tab, which only allows me to change the collation. What is wrong with this picture? How do I give root all privileges? I've tried a few variations of GRANT ALL PRIVILEGES etc. from the command line but nothing works. My root account is unable to alter column names, indexes, or options of any table that I create. I can create tables and delete them, but I can't alter them.
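    For reference, since the asker mentions trying GRANT variations: the canonical command-line form for giving an account full privileges looks roughly like this (a sketch assuming the account is 'root'@'localhost'; the host part must match how you actually log in):

    -- Run while connected as an account that already has GRANT rights
    GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
    FLUSH PRIVILEGES;

    -- Verify what the account actually has
    SHOW GRANTS FOR 'root'@'localhost';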

    Read the article

  • Why your Netapp is so slow...

    - by Darius Zanganeh
    Have you ever wondered why your Netapp FAS box is slow and doesn't perform well at large block workloads? In this blog entry I will give you a little bit of information that will probably help you understand why it's so slow and why you shouldn't use it for applications that read and write in large blocks like 64k, 128k, 256k and up. Of course, since I work for Oracle at this time, I will show you why the ZS3 storage boxes are excellent choices for these types of workloads.
    Netapp's Fundamental Problem
    The fundamental problem you have running these workloads on Netapp is the backend block size of their WAFL file system. Every application block on a Netapp FAS ends up in a 4k chunk on a disk. Reference: Netapp TR-3001 Whitepaper. Netapp has proven this lack of large block performance in at least two different ways: they have NEVER posted an SPC-2 benchmark, yet they have posted SPC-1 and SPECSFS, both recently; and in 2011 they purchased Engenio to try and fill this gap in their portfolio.
    Block Size Matters
    So why does block size matter anyway? Many applications use large block chunks of data, especially in the Big Data movement. Some examples are SAS Business Analytics and Microsoft SQL; Hadoop HDFS blocks are even 64MB! Now let me boil this down for you. If an application such as MS SQL is writing data in a 64k chunk, then before Netapp actually writes it on disk it will have to split it into 16 different 4k writes and 16 different disk IOPS. When the application later goes to read that 64k chunk, the Netapp will again have to do 16 different disk IOPS. In comparison, the ZS3 Storage Appliance can write in variable block sizes ranging from 512b to 1MB. So if you put the same MSSQL database on a ZS3, you can set the specific LUNs for this database to 64k, and then an application read/write requires only a single disk IO. That is 16x faster! But, back to the problem with your Netapp: you will VERY quickly run out of disk IO and hit a wall. Now all arrays have some fancy prefetch algorithm and some nice cache, maybe even flash-based cache such as a PAM card in your Netapp, but with large block workloads you will usually blow through the cache and still need significant disk IO. Also, because these datasets are usually very large and not dedupable, they are usually not good candidates for an all-flash system. You can do some simple math in Excel and very quickly see why it matters (a worked example follows at the end of this entry). Here are a couple of READ examples using SAS and MSSQL. Assume these are the READ IOPS the application needs even after all the fancy cache and algorithms. Here is an example with 128k blocks. Notice the number of drives on the Netapp! Here is an example with 64k blocks. You can easily see that the Oracle ZS3 can do dramatically more work with dramatically fewer drives. This doesn't even take into account that the ONTAP system will likely run out of CPU well before you get to these drive numbers, so you would be buying many more controllers. So with all that said, let's look at the ZS3 and why you should consider it for any workload you're running on Netapp today.
    ZS3 World Record Price/Performance in the SPC-2 benchmark
    ZS3-2 is #1 in Price-Performance at $12.08. ZS3-2 is #3 in Overall Performance at 16,212 MBPS. Note: the number one overall spot in the world is held by an AFA at 33,477 MBPS, but at a Price-Performance of $29.79. A customer could purchase 2 x ZS3-2 systems in the benchmark with relatively the same performance and walk away with $600,000 in their pocket.
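    To make the "simple math" concrete, here is a worked example with assumed numbers (the 20,000 read IOPS application demand and the ~200 IOPS per disk are illustrative assumptions, not figures from the original tables):

    application reads: 20,000 IOPS at 64k each
    on a 4k backend:   20,000 x (64k / 4k) = 320,000 disk IOPS  ->  320,000 / 200 = 1,600 drives
    at native 64k:     20,000 x 1          =  20,000 disk IOPS  ->   20,000 / 200 =   100 drives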

    Read the article

  • Hardening network with sysctl settings made Wi-fi downloading speed extremely slow

    - by Rohit Bansal
    I just followed these steps to harden network security. The /etc/sysctl.conf file contains all the sysctl settings. To prevent source routing of incoming packets and log malformed IPs, enter the following in a terminal window: sudo vi /etc/sysctl.conf. Edit the /etc/sysctl.conf file and un-comment or add the following lines:

    # IP Spoofing protection
    net.ipv4.conf.all.rp_filter = 1
    net.ipv4.conf.default.rp_filter = 1
    # Ignore ICMP broadcast requests
    net.ipv4.icmp_echo_ignore_broadcasts = 1
    # Disable source packet routing
    net.ipv4.conf.all.accept_source_route = 0
    net.ipv6.conf.all.accept_source_route = 0
    net.ipv4.conf.default.accept_source_route = 0
    net.ipv6.conf.default.accept_source_route = 0
    # Ignore send redirects
    net.ipv4.conf.all.send_redirects = 0
    net.ipv4.conf.default.send_redirects = 0
    # Block SYN attacks
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 2048
    net.ipv4.tcp_synack_retries = 2
    net.ipv4.tcp_syn_retries = 5
    # Log Martians
    net.ipv4.conf.all.log_martians = 1
    net.ipv4.icmp_ignore_bogus_error_responses = 1
    # Ignore ICMP redirects
    net.ipv4.conf.all.accept_redirects = 0
    net.ipv6.conf.all.accept_redirects = 0
    net.ipv4.conf.default.accept_redirects = 0
    net.ipv6.conf.default.accept_redirects = 0
    # Ignore Directed pings
    net.ipv4.icmp_echo_ignore_all = 1

    To reload sysctl with the latest changes, enter: sudo sysctl -p. But after applying the changes I found the Wi-Fi downloading speed and terminal downloading speed extremely slow (less than 1KB/s), although surfing speed through the browser was good. Using a direct ethernet cable still gave good speed. I then reverted the above changes and things fell back in line once again. Could you please let me know what in the above settings could be affecting this behaviour [and why]? How can I still harden network security without disturbing the Wi-Fi downloading speed?

    Read the article

  • best practice? Consumer data in MySQL on Amazon EBS (Elastic block store)

    - by jeff7091
    This is a consumer app, so I will care about storage costs - I don't want to have 5x copies of data lying about. The app shards very well, so I can use MySQL and not have scaling issues. Amazon EBS has a nice baseline+snapshot backup capability that uses S3. This should have a light footprint (in terms of storage cost). BUT: the magnolia.com story scares the crap out of me: basically flawless block-level backup of a corrupt DB or filesystem. Is there anything that is nearly as storage efficient as EBS at the MySQL level?

    Read the article

  • To Do list for multiple users using MySQL, Need some advice regarding Projectwork?

    - by Steve
    I am thinking of creating a To-Do list application for my project work, meant for multiple users. Each user will have his own login and password to access his account or profile, and from there he/she can manage his own To-Do list. It has to be kind of like Remember the Milk. Every user will have his own To-Do list, since different users will have different To-Dos to perform and so different To-Do lists. The data fields, in tabular form, would be:

    Task || Priority || Deadline || Number of days required || Status
    -----||----------||----------||-------------------------||--------
    -----||----------||----------||-------------------------||--------
    -----||----------||----------||-------------------------||--------

    So what I meant to ask: can this type of thing be done using MySQL as the database and any web-based server-side language (PHP, ASP, JSP)? I mean, can this be done with an RDBMS like MySQL, where different member users each have their own To-Do lists to keep and maintain? (A possible schema sketch follows below.)
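    Not an authoritative answer, but to show the idea is feasible in MySQL, here is one minimal schema sketch for this kind of multi-user list (all table and column names are hypothetical):

    -- Hypothetical schema: one row per user, one row per task
    CREATE TABLE users (
        user_id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        login         VARCHAR(50) NOT NULL UNIQUE,
        password_hash CHAR(60)    NOT NULL           -- store a hash, never the raw password
    );

    CREATE TABLE todos (
        todo_id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        user_id       INT UNSIGNED NOT NULL,
        task          VARCHAR(255) NOT NULL,
        priority      TINYINT      NOT NULL DEFAULT 3,
        deadline      DATE         NULL,
        days_required INT          NULL,
        status        ENUM('open','in_progress','done') NOT NULL DEFAULT 'open',
        FOREIGN KEY (user_id) REFERENCES users(user_id)
    );

    -- Each user then sees only his own list:
    -- SELECT task, priority, deadline, days_required, status
    -- FROM todos WHERE user_id = ? ORDER BY priority, deadline;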

    Read the article

  • Checking if your SIMPLE databases need a log backup

    - by Fatherjack
    Hopefully you have read the blog by William Durkin explaining why your SIMPLE databases need a log backup in some cases. There is a SQL Server bug that means in some cases databases are marked as being in SIMPLE recovery but have a log wait type that shows they are not properly configured. Please read his blog for the full explanation and a great description of how to reproduce the issue. As part of our work (William happens to be my boss) to recover our affected databases, I wrote this small PowerShell script to quickly check our servers for databases that need the attention William details:

    cls
    $Servers = "Server01","Server02","etc","etc"
    foreach($Server in $Servers){
        write-host "************" $Server "****************"
        $Server = New-Object Microsoft.SqlServer.Management.Smo.Server $Server
        foreach($db in $Server.Databases){
            $db | where {$_.RecoveryModel -eq "Simple" -and $_.LogReuseWaitStatus -ne "Nothing"} | select Name, LogReuseWaitStatus
        }
    }

    If you get any results from this query then you should consult William's blog for the details of what action you should take. This script can give false positives in some circumstances, depending on how busy your databases are. Hopefully this will let you check your servers quickly, and if you find any problems you can reference William's blog to understand what you need to do.
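    If you would rather check a single instance directly from T-SQL, a roughly equivalent query (a sketch using the standard sys.databases catalog view) is:

    -- Databases in SIMPLE recovery whose log is still waiting on something
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE recovery_model_desc = 'SIMPLE'
      AND log_reuse_wait_desc <> 'NOTHING';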

    Read the article

  • DRBD and MySQL - Heartbeat Setup

    Heartbeat automates all the moving parts and works as well with the MySQL master-master active/passive solution as it does with the MySQL & DRBD solution. It manages the virtual IP address used by the database, directs DRBD to become primary or relinquish primary duties, mounts the /dev/drbd0 device, and starts/stops MySQL as needed.

    Read the article

  • Apache and MySQL taking all the memory? Maximum connections?

    - by lpfavreau
    I've had one of our servers going down (network-wise) while keeping its uptime (so it looks like the server is not losing power) recently. I've asked my hosting company to investigate and I've been told, after investigation, that Apache and MySQL were at all times using 80% of the memory, peaking at 95%, and that I might need to add more RAM to the server. One of their justifications for adding more RAM was that I was using the default max connections settings (125 for MySQL and 150 for Apache) and that for handling those 150 simultaneous connections I would need at least 3Gb of memory instead of the 1Gb I have at the moment. Now, I understand that tweaking the max connections might be better than leaving the default setting, although I didn't feel it was a concern at the moment, having had servers with the same configuration handle more traffic than the current 1 or 2 visitors before the launch, telling myself I'd tweak it depending on the visit patterns later. I've also always known Apache is more memory hungry under default settings than competitors such as nginx and lighttpd. Nonetheless, looking at the stats of my machine, I'm trying to see how my hosting company got those numbers. I'm getting:

    # free -m
                 total       used       free     shared    buffers     cached
    Mem:          1000        944         56          0        148        725
    -/+ buffers/cache:         71        929
    Swap:         1953          0       1953

    Which I guess means that yes, the server is reserving around 95% of its memory at the moment, but I also thought it meant that only 71 out of the 1000 total were really used by the applications, looking at the buffers/cache row. Also I don't see any swapping:

    # vmstat 60
    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
     0  0      0  57612 151704 742596    0    0     1     1    3   11  0  0 100  0
     0  0      0  57604 151704 742596    0    0     0     1    1   24  0  0 100  0
     0  0      0  57604 151704 742596    0    0     0     2    1   18  0  0 100  0
     0  0      0  57604 151704 742596    0    0     0     0    1   13  0  0 100  0

    And finally, while requesting a page:

    top - 08:33:19 up 3 days, 13:11, 2 users, load average: 0.06, 0.02, 0.00
    Tasks: 81 total, 1 running, 80 sleeping, 0 stopped, 0 zombie
    Cpu(s): 1.3%us, 0.3%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 1024616k total, 976744k used, 47872k free, 151716k buffers
    Swap: 2000052k total, 0k used, 2000052k free, 742596k cached
      PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
    24914 www-data 20  0 26296 8640 3724 S    2  0.8  0:00.06 apache2
    23785 mysql    20  0  125m  18m 5268 S    1  1.9  0:04.54 mysqld
    24491 www-data 20  0 25828 7488 3180 S    1  0.7  0:00.02 apache2
        1 root     20  0  2844 1688  544 S    0  0.2  0:01.30 init
    ...

    So, I'd like to know, experts of serverfault: Do I really need more RAM at the moment? How do they calculate that for 150 simultaneous connections I'd need 3Gb? Thanks for your help!
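    One quick sanity check on the MySQL side is to compare the configured connection limit with the high-water mark of connections actually used since the last restart; a sketch of the standard statements:

    -- Configured ceiling
    SHOW VARIABLES LIKE 'max_connections';

    -- Most simultaneous connections seen since startup
    SHOW GLOBAL STATUS LIKE 'Max_used_connections';

    -- Key buffers that drive the per-instance memory footprint
    SHOW VARIABLES WHERE Variable_name IN
        ('innodb_buffer_pool_size', 'key_buffer_size');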

    Read the article

  • MySQL Enterprise Backup 3.8.2 - Overview

    - by Priya Jayakumar
    MySQL Enterprise Backup (MEB) is the ideal solution for backing up MySQL databases. MEB 3.8.2 was released in June 2013. The main goal of the MySQL Enterprise Backup 3.8.2 release is to improve usability. With this release, users can track the progress of a backup both in terms of size and as a percentage of the total. This release also offers options to manage the behavior of MEB in case space on the secondary storage is completely exhausted during a backup. The progress indicator is a (short) string that indicates how far the execution of a time-consuming MEB command has progressed. It consists of one or more "meters" that measure the progress of the command. Two options are introduced to control the progress reporting of the mysqlbackup command: (1) --show-progress and (2) --progress-interval. The user can control the progress indicator by using the --show-progress option in any of the MEB operations. This option instructs MEB to output periodic short reports on the progress of time-consuming commands. The argument of this option specifies where the output should be sent: for example stderr, stdout, a file, a fifo, or a table. With the --show-progress option, both the total size of the backup to be copied and the size already copied will be shown. Along with this, the state of the operation (for example, data or meta-data being copied, or tables being locked) will also be reported. This gives the DBA much clearer information on the progress of the running backup. The interval between progress reports, in seconds, is controlled by the --progress-interval option. For more information on this please refer to progress-report-options. MEB can also be accessed through a GUI in the next version of MySQL Workbench. This can be used as the front-end interface for MEB users to perform backup operations at the click of a button. This feature was highly requested by DBAs and will be very useful. Refer to http://insidemysql.com/mysql-workbench-6-0-a-sneak-preview/ for info on the upcoming Workbench release. Along with the progress report feature, some important issues like the ones below are also addressed in MEB 3.8.2. In MEB 3.8.2 a new command line option --on-disk-full is introduced to abort or warn the user when a backup process encounters a full disk condition; when no value is given, it aborts by default. A few issues related to incremental backup are also addressed in this release. Please refer to the 3.8.2 documentation for more details. It would be good for MEB users to move to 3.8.2 to take incremental backups. Overall, the added usability and the important defects fixed make MySQL Enterprise Backup 3.8.2 a promising release.

    Read the article

< Previous Page | 254 255 256 257 258 259 260 261 262 263 264 265  | Next Page >