Search Results

Search found 11313 results on 453 pages for 'joel day'.


  • Regex for date.

    - by Harikrishna
    What should the regex be for matching dates in any of these formats: 26FEB2009, 30 Jul 2009, 27 Mar 2008, 29/05/2008, 27 Aug 2009?

    Edit: I have a regex that matches 26-Feb-2009 and 26 FEB 2009, but not 26FEB2009. If anyone knows how to extend it, please update it:

        (?:^|[^\d\w:])(?'day'\d{1,2})(?:-?st\s+|-?th\s+|-?rd\s+|-?nd\s+|-|\s+)(?'month'Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[uarychilestmbro]*(?:\s*,?\s*|-)(?:'?(?'year'\d{2})|(?'year'\d{4}))(?=$|[^\d\w])
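    One possible direction, sketched below: the pattern's day/month and month/year separators are required, so 26FEB2009 can never match. Making both separator groups optional (note the added ?) lets the run-together form through. This is a sketch in .NET regex syntax (which the (?'name'...) groups above suggest), not a drop-in fix, and it still will not match purely numeric dates like 29/05/2008:

        using System;
        using System.Text.RegularExpressions;

        class DateRegexSketch
        {
            static void Main()
            {
                // same pattern as above, with a trailing ? added to both
                // separator groups so they may be absent entirely
                string pattern =
                    @"(?:^|[^\d\w:])(?'day'\d{1,2})" +
                    @"(?:-?st\s+|-?th\s+|-?rd\s+|-?nd\s+|-|\s+)?" +
                    @"(?'month'Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)" +
                    @"[uarychilestmbro]*(?:\s*,?\s*|-)?" +
                    @"(?:'?(?'year'\d{2})|(?'year'\d{4}))(?=$|[^\d\w])";

                foreach (string s in new[] { "26FEB2009", "26 Feb 2009", "26-Feb-2009" })
                    Console.WriteLine("{0}: {1}", s, Regex.IsMatch(s, pattern, RegexOptions.IgnoreCase));
            }
        }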

    Read the article

  • ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

    - by Imran
    Can someone please help? I've spent all day trying to fix this. I installed the latest XAMPP and now I can't connect to MySQL from the terminal. I checked my .profile and the PATH seems OK. Does anyone know what happened and what the solution is?

        PATH=$PATH:/Applications/XAMPP/xamppfiles/bin
        export PATH

    This is a programming question, as I'm a PHP developer trying to do my job! Thank you so much in advance ;-)
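    One common cause, offered as a guess: XAMPP's MySQL listens on its own socket rather than /tmp/mysql.sock, which the stock mysql client tries by default. A sketch of pointing the client at XAMPP's usual socket path (verify the path on your install):

        # the path below is XAMPP's usual default on macOS; adjust if yours differs
        mysql --socket=/Applications/XAMPP/xamppfiles/var/mysql/mysql.sock -u root -p

        # or symlink it to the location the client expects
        sudo ln -s /Applications/XAMPP/xamppfiles/var/mysql/mysql.sock /tmp/mysql.sock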

    Read the article

  • MySQL query from subquery not working

    - by James Goodwin
    I am trying to return a number based on the count of results from a table, and to avoid counting the results twice in the IF statement I am using a subquery. However, I get a syntax error when running the query, even though the subquery runs fine by itself. Any ideas what is wrong with the query? The syntax looks correct to me:

        SELECT IF(daily_count > 8000, 0, IF(daily_count > 6000, 1, 2))
        FROM (
            SELECT count(*) AS daily_count
            FROM orders201003
            WHERE DATE_FORMAT(date_sub(curdate(), INTERVAL 1 DAY), "%d-%m-%y") = DATE_FORMAT(reqDate, "%d-%m-%y")
        ) q

    Read the article

  • ExpectedExceptionAttribute is not working in MSTest

    - by Micah
    This is weird, but all of a sudden the ExpectedExceptionAttribute quit working for me the other day, and I'm not sure what's gone wrong. I'm running VS 2010 and VS 2005 side by side; it's not working in VS 2010. This test should pass, yet it fails:

        [TestMethod]
        [ExpectedException(typeof(ArgumentNullException))]
        public void Test_Exception()
        {
            throw new ArgumentNullException("test");
        }

    Any ideas? This is really frustrating.
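    A workaround sketch while the attribute misbehaves: assert the exception by hand, so the test no longer depends on [ExpectedException] at all (MethodUnderTest is a hypothetical stand-in for the call expected to throw):

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class ExceptionTests
        {
            static void MethodUnderTest()
            {
                throw new ArgumentNullException("test");
            }

            [TestMethod]
            public void Test_Exception_Manual()
            {
                try
                {
                    MethodUnderTest();
                    Assert.Fail("Expected ArgumentNullException was not thrown.");
                }
                catch (ArgumentNullException)
                {
                    // expected: the test passes
                }
            }
        }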

    Read the article

  • Cartesian product in Scheme

    - by John Retallack
    I've been trying to write a function in Dr. Scheme that returns the Cartesian product of n sets, given as a list of lists. I've been stuck on this all day and would like a few pointers on where to start. I've written a piece of code, but it doesn't work:

        (define cart-n
          (lambda (l)
            (if (null? l)
                '(())
                (map (lambda (lst) (cons (car (car l)) lst))
                     (cart-n (cdr l))))))
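    For comparison, a working sketch: the missing step is mapping over every element of the first set (not just its car) and appending the partial products (an example call is shown in the trailing comment):

        (define (cart-n l)
          (if (null? l)
              '(())
              (let ((rest (cart-n (cdr l))))
                (apply append
                       (map (lambda (x)
                              (map (lambda (tail) (cons x tail)) rest))
                            (car l))))))

        ; (cart-n '((1 2) (a b)))  =>  ((1 a) (1 b) (2 a) (2 b))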

    Read the article

  • Wix CopyFile only on target machine

    - by Burt
    I need to copy a file that already exists on the target machine's hard drive, based on a registry setting that holds the folder path. I have been trying to get this going for a day or two and am having difficulty; can anyone help? Thanks, B
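    A sketch of one way this is commonly wired up in WiX, with a RegistrySearch filling a property and CopyFile reading from it (the registry key, file name, and directory ids below are illustrative, not taken from the question):

        <Property Id="SOURCEFOLDER">
          <RegistrySearch Id="SourceFolderSearch" Root="HKLM"
                          Key="Software\MyApp" Name="DataFolder" Type="directory" />
        </Property>

        <DirectoryRef Id="INSTALLFOLDER">
          <Component Id="CopyExistingFile" Guid="*">
            <CopyFile Id="CopySettings" SourceProperty="SOURCEFOLDER"
                      SourceName="settings.ini" DestinationDirectory="INSTALLFOLDER" />
          </Component>
        </DirectoryRef>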

    Read the article

  • SQL: Daily Average of Logins Per Hour

    - by jerrygarciuh
    This query produces counts of logins per hour:

        SELECT DATEADD(hour, DATEDIFF(hour, 0, EVENT_DATETIME), 0), COUNT(*)
        FROM EVENTS_ALL_RPT_V1
        WHERE EVENT_NAME = 'Login'
          AND EVENT_DATETIME >= CONVERT(DATETIME, '2010-03-17 00:00:00', 120)
          AND EVENT_DATETIME <= CONVERT(DATETIME, '2010-03-24 00:00:00', 120)
        GROUP BY DATEADD(hour, DATEDIFF(hour, 0, EVENT_DATETIME), 0)
        ORDER BY DATEADD(hour, DATEDIFF(hour, 0, EVENT_DATETIME), 0)

    ...with lots of results like this:

        Datetime                   COUNT(*)
        ---------------------------------
        2010-03-17 12:00:00.000    135
        2010-03-17 13:00:00.000    129
        2010-03-17 14:00:00.000    147

    What I need to figure out is how to query the average logins per hour for a given day. Any help?
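    A sketch of one approach: treat the hourly grouping above as a derived table for a single day and average its counts (the date literals are illustrative; the CAST avoids integer averaging):

        SELECT AVG(CAST(hourly_count AS FLOAT)) AS avg_logins_per_hour
        FROM (
            SELECT DATEADD(hour, DATEDIFF(hour, 0, EVENT_DATETIME), 0) AS event_hour,
                   COUNT(*) AS hourly_count
            FROM EVENTS_ALL_RPT_V1
            WHERE EVENT_NAME = 'Login'
              AND EVENT_DATETIME >= CONVERT(DATETIME, '2010-03-17 00:00:00', 120)
              AND EVENT_DATETIME <  CONVERT(DATETIME, '2010-03-18 00:00:00', 120)
            GROUP BY DATEADD(hour, DATEDIFF(hour, 0, EVENT_DATETIME), 0)
        ) AS hourly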

    Read the article

  • What is the best keyboard/mouse for ergonomics or to prevent wrist pain?

    - by Steve Duitsman
    I have had pain in my wrists in the past, and as someone who types all day, I was wondering which keyboards or mice have helped with this sort of pain. Update: Many answers have recommended examining chairs and desks for ergonomics. As someone who isn't able to work from home and therefore doesn't have much control over this, is ordering my own chair or desk (whether my employer or I pay for it) really a realistic solution?

    Read the article

  • Localize jQuery Date Picker using the Zend Framework

    - by Matijs
    I am trying to create a datepicker using a locale other than English. According to the jQuery manual I need to add the line $("#datepicker").datepicker($.datepicker.regional['fr']); to $view->jquery()->addOnLoad(). The code is output and runs without any errors, but the month and day names are still in English. Is there a simpler option, or what am I overlooking?
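    A guess at what's missing, sketched below: $.datepicker.regional['fr'] is undefined unless a jQuery UI i18n file has actually been loaded on the page, in which case the datepicker silently falls back to English. Assuming the ZendX_JQuery helper and a local copy of the i18n bundle (the path is illustrative):

        // load the locale definitions before the datepicker call runs
        $view->jquery()->addJavascriptFile('/js/jquery-ui-i18n.js');
        $view->jquery()->addOnLoad(
            "$('#datepicker').datepicker($.datepicker.regional['fr']);"
        );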

    Read the article

  • "tracing" version of readlink(1)

    - by jonrock
    I would like a version of "readlink -f" that provides a trace of every individual symlink resolution it performs. Something like:

        $ linktrace /usr/lib64/sendmail
        /usr/lib64 -> lib
        /usr/lib/sendmail -> ../sbin/sendmail
        /usr/sbin/sendmail
        $

    I know I have used this utility in the past, on Linux, and also remember thinking at the time, "the name of this tool is completely unintuitive and I will forget it". Well, that day has arrived.
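    A guess at the forgotten tool: namei, from util-linux, walks a path one component at a time and shows each symlink it resolves, which matches the hypothetical linktrace output above fairly closely:

        # each path component is printed on its own line; symlinks show their targets
        namei /usr/lib64/sendmail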

    Read the article

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third-party web sites. This is its structure: id, site_id, unixtime, unixtime_last, ip_address, uid. There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid.

    There are many different ways we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as to determine whether this is a new or a returning visitor. Obviously storing site_id inside three indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given site_id.

    Any ideas on making this more efficient? I don't really understand B-trees beyond the basics, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? I considered making site_id the second column of the ip_address and uid indexes, but I think that would make the index less efficient, since the IP and UID will vary more than the site ID: we have only about 8,000 unique sites per database server, but millions of unique visitors across all ~8,000 sites on a daily basis.

    I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but when that does happen, I fear it could be slow to determine whether this is a new visitor to this site_id or not. The query would be something like:

        SELECT id FROM sessions WHERE uid = 'value' AND site_id = 123 LIMIT 1

    ...so if this visitor had visited this site before, it would only need to find one row with this site_id before stopping. This wouldn't necessarily be super fast, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't find one with this site ID.

    Any insight on making this as efficient as possible would be appreciated :)

    Update: this is a MyISAM table on MySQL 5.0. My concerns are with both performance and storage space; this table is both read- and write-heavy. If I had to choose between performance and storage, my biggest concern is performance, but both are important. We use memcached heavily in all areas of our service, but that's not an excuse not to care about the database design. I want the database to be as efficient as possible.
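    A sketch of the narrower-index option described above (table and column names as in the question; the index name is illustrative): index uid alone, and let MySQL filter the handful of rows a single uid can have by site_id:

        ALTER TABLE sessions ADD INDEX idx_uid (uid);

        -- the lookup then reads only the rows for this uid, checking
        -- site_id on each until one matches or the uid's rows run out
        SELECT id FROM sessions WHERE uid = 'value' AND site_id = 123 LIMIT 1;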

    Read the article

  • Get records that appear more than once per day

    - by milo2010
    How can I see all the records that appear more than once per day? I have this table:

        ID  Name    Date
        1   John    27.03.2010 18:17:00
        2   Mike    27.03.2010 16:38:00
        3   Sonny   28.03.2010 20:23:00
        4   Anna    29.03.2010 13:51:00
        5   Maria   29.03.2010 21:59:00
        6   Penny   29.03.2010 17:25:00
        7   Alba    30.03.2010 09:36:00
        8   Huston  31.03.2010 10:19:00

    I want to get:

        1   John    27.03.2010 18:17:00
        2   Mike    27.03.2010 16:38:00
        4   Anna    29.03.2010 13:51:00
        5   Maria   29.03.2010 21:59:00
        6   Penny   29.03.2010 17:25:00
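    A sketch of one way to express this in MySQL, assuming the table is called records and Date is a DATETIME column (both names illustrative): group by calendar day, keep days with more than one row, and join back:

        SELECT t.ID, t.Name, t.`Date`
        FROM records t
        JOIN (
            SELECT DATE(`Date`) AS d
            FROM records
            GROUP BY DATE(`Date`)
            HAVING COUNT(*) > 1
        ) dup ON DATE(t.`Date`) = dup.d;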

    Read the article


  • Software development working from home

    - by johnhilbron
    Hi, do you all think that working from home is the wave of the future for software development? In this day and age it seems like a logical next step for software developers to work from their homes and connect to each other using IM, video chat, and phone. What forces are pushing software development in this direction? What forces are keeping more people from working remotely? John

    Read the article

  • Caching web API proxy?

    - by Jeremy Dunck
    I was wondering if anyone knows of a caching proxy designed specifically for API responses? Ideally, I'd be able to declare a caching policy per API semantics, e.g. cache album art for 1 day, cache favorite tweets for 5 minutes, cache map tiles forever except invalidate when this other API is called. I know about using Apache, Squid, etc. for caching in general -- I'm just hoping for something with nicer usage semantics, by restricting the design goal to APIs rather than the web in general.
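    Short of a purpose-built tool, a general proxy can approximate the per-endpoint part of this. A sketch in nginx (the location paths, cache zone, and upstream name are illustrative) covering the differing lifetimes, though not the cross-API invalidation:

        proxy_cache_path /var/cache/nginx keys_zone=api_cache:10m;

        server {
            listen 8080;

            location /album-art/ {
                proxy_pass http://backend;
                proxy_cache api_cache;
                proxy_cache_valid 200 1d;   # album art: one day
            }

            location /favorite-tweets/ {
                proxy_pass http://backend;
                proxy_cache api_cache;
                proxy_cache_valid 200 5m;   # favorites: five minutes
            }
        }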

    Read the article

  • ie7 z-index problem

    - by rezna
    Hi, I've isolated a small test case of IE7's z-index bug, but don't know how to fix it. I have been playing with z-index values all day, but it didn't help. If someone knows what to do about it, please help ;) The test case is located here: http://upload.rezna.info/z-index-test.html The problem is that in IE7 the second textbox is placed over the red list (the suggest box). Thx, rezna
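    The usual culprit, offered as a guess: IE7 starts a new stacking context at every positioned element, so a z-index on the suggest box itself is trumped by its parent's. A sketch of the common fix (the selector names are illustrative):

        /* raise the positioned ancestor that contains the suggest box,
           not just the suggest box itself */
        .suggest-container { position: relative; z-index: 100; }

        /* and keep the following positioned sibling below it */
        .second-textbox-wrapper { position: relative; z-index: 1; }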

    Read the article

  • In MySQL, what is the most effective query design for joining large tables with many to many relationships

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it, and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this:

        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | vehicle | engine | engine_state | state_start_time    | state_end_time      | engine_variable |
        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | 080025  | E01    | active       | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 |             720 |
        | 080028  | E02    | inactive     | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 |             304 |
        +---------+--------+--------------+---------------------+---------------------+-----------------+

    For a specific analysis, we would like to analyze table content at a row granularity of minutes, rather than the current basis of active/inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing, and joining the productionMinute and engineState tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day in that three-month period. The first few rows of the productionMinute table:

        +---------------------+
        | production_minute   |
        +---------------------+
        | 2009-12-01 00:00    |
        | 2009-12-01 00:01    |
        | 2009-12-01 00:02    |
        | 2009-12-01 00:03    |
        +---------------------+

    The join between the tables would be:

        engineState AS es
        LEFT JOIN productionMinute AS pm
               ON es.state_start_time <= pm.production_minute
              AND pm.production_minute <= es.state_end_time

    This join, however, brings up multiple environmental issues:

    - The engineState table has 5 million rows and the productionMinute table has 130,000 rows.
    - When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as in the example above, multiple productionMinute rows join to a single engineState row.
    - When there is more than one engine in operation during any given minute, also as in the example above, multiple engineState rows join to a single productionMinute row.

    In testing our logic, using only a small extract (one day rather than 3 months for the productionMinute table), the query takes over an hour to generate. To improve performance so that querying three months of data is feasible, our thought is to create a temporary table from the engineState one, eliminating any data that is not critical for the analysis, and join the temporary table to the productionMinute table. We are also planning to experiment with different joins -- specifically an inner join -- to see if that improves performance. A sketch of that variant follows after this list of questions.

    What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left/right, inner)?
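    The inner-join variant under consideration, sketched with the one index that usually matters for this shape of range join (the index name is illustrative): an index leading on state_start_time lets the optimizer bound the scan for each minute's probe instead of touching all 5M rows:

        CREATE INDEX idx_state_times ON engineState (state_start_time, state_end_time);

        SELECT pm.production_minute,
               es.vehicle, es.engine, es.engine_variable
        FROM productionMinute AS pm
        INNER JOIN engineState AS es
                ON es.state_start_time <= pm.production_minute
               AND pm.production_minute <= es.state_end_time;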

    Read the article

  • SQL replication - collecting data

    - by Cicik
    Hi, I have a master SQL Server with a Central DB and a lot of satellite SQL Servers with a Client DB. I need to collect data from the log tables (LogTable) on each Client (each client has its own ID in the log table) into one big table on Central (LogTableCentral). Data must go only from Client to Central, and on each Client I want to have only that client's data. I need a solution with a minimal amount of work on the client side because of the number of clients. Central is SQL Server Enterprise; the Clients are SQL Server 2005 and 2008. Thanks a lot. EDIT: The data can be collected periodically (for example, every day at 01:00).
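    One low-touch pattern, sketched under assumptions: all work happens on Central, which pulls from each client over a linked server on a schedule (for example a SQL Agent job at 01:00), so the clients need no changes at all. The linked-server, database, and column names and the watermark logic below are illustrative:

        -- pull new rows for one client; repeat per linked server
        INSERT INTO dbo.LogTableCentral (ClientId, LogTime, Message)
        SELECT l.ClientId, l.LogTime, l.Message
        FROM [CLIENT01].ClientDb.dbo.LogTable AS l
        WHERE l.LogTime > (SELECT ISNULL(MAX(c.LogTime), '19000101')
                           FROM dbo.LogTableCentral AS c
                           WHERE c.ClientId = l.ClientId);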

    Read the article
