Search Results

Search found 48823 results on 1953 pages for 'run loop'.


  • Trying to right-click on code in VS2008 causes lockup.

    - by Adam Haile
    Working on a Win32 DLL using Visual Studio 2008 SP1. Since yesterday, whenever I try to right-click on code (to go to a variable definition, for example), VS completely locks up and I have to kill the process manually. To make it even weirder, whenever this happens the devenv.exe process uses exactly 25% of the CPU. And I mean exactly: never 24%, never 26%, always 25%. I've also run ProcMon to see if devenv is actually doing something, but it's doing absolutely nothing external to the process. No disk, network, or registry access. Nothing. This is getting really aggravating, because I have a large code base to deal with and the only other way of jumping to a definition is to search for it first. Has anyone run into a similar issue? And, better yet, does anyone know a fix?

    Read the article

  • Multiple instances of this carousel on a single page - can't get it to work

    - by Andy
    This code comes from a tutorial, so it's not originally my own work. What I am trying to do is implement this several times on a single page. I have tried and so far failed - by numbering the id "carousel" and so forth. Any help would be seriously appreciated; I'm tearing my hair out. http://jsfiddle.net/AndyMP/zcKDV/5/ For completeness, this is the carousel jQuery as it stands:

        //rotation speed and timer
        var speed = 5000;
        var run = setInterval('rotate()', speed);

        //grab the width and calculate left value
        var item_width = $('#slides li').outerWidth();
        var left_value = item_width * (-1);

        //move the last item before first item, just in case user clicks prev button
        $('#slides li:first').before($('#slides li:last'));

        //set the default item to the correct position
        $('#slides ul').css({'left' : left_value});

        //if user clicked on prev button
        $('#prev').click(function() {
            //get the right position
            var left_indent = parseInt($('#slides ul').css('left')) + item_width;
            //slide the item
            $('#slides ul').animate({'left' : left_indent}, 200, function() {
                //move the last item and put it as first item
                $('#slides li:first').before($('#slides li:last'));
                //set the default item to correct position
                $('#slides ul').css({'left' : left_value});
            });
            //cancel the link behavior
            return false;
        });

        //if user clicked on next button
        $('#next').click(function() {
            //get the right position
            var left_indent = parseInt($('#slides ul').css('left')) - item_width;
            //slide the item
            $('#slides ul').animate({'left' : left_indent}, 200, function () {
                //move the first item and put it as last item
                $('#slides li:last').after($('#slides li:first'));
                //set the default item to correct position
                $('#slides ul').css({'left' : left_value});
            });
            //cancel the link behavior
            return false;
        });

        //if mouse hover, pause the auto rotation, otherwise rotate it
        $('#slides').hover(
            function() { clearInterval(run); },
            function() { run = setInterval('rotate()', speed); }
        );

        //a simple function to click next link
        //a timer will call this function, and the rotation will begin :)
        function rotate() {
            $('#next').click();
        }
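
    The usual way to run several of these on one page is to drop the hard-coded ids and scope every selector to a container element. A rough sketch of that refactor (untested; it assumes each carousel is a .carousel wrapper containing a .slides list and .prev/.next links):

        // build one carousel per wrapper; everything is looked up relative
        // to $wrap, so no ids are needed and instances don't collide
        function makeCarousel($wrap, speed) {
            var $slides = $wrap.find('.slides'),
                item_width = $slides.find('li').outerWidth(),
                left_value = item_width * (-1),
                run = setInterval(rotate, speed);

            $slides.find('li:first').before($slides.find('li:last'));
            $slides.find('ul').css({'left' : left_value});

            function slide(direction) {
                var left_indent = parseInt($slides.find('ul').css('left'), 10) + (direction * item_width);
                $slides.find('ul').animate({'left' : left_indent}, 200, function() {
                    if (direction > 0) {
                        $slides.find('li:first').before($slides.find('li:last'));
                    } else {
                        $slides.find('li:last').after($slides.find('li:first'));
                    }
                    $slides.find('ul').css({'left' : left_value});
                });
            }

            $wrap.find('.prev').click(function() { slide(1); return false; });
            $wrap.find('.next').click(function() { slide(-1); return false; });

            $slides.hover(
                function() { clearInterval(run); },
                function() { run = setInterval(rotate, speed); }
            );

            function rotate() { $wrap.find('.next').click(); }
        }

        // one call per carousel on the page
        $('.carousel').each(function() { makeCarousel($(this), 5000); });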

    Read the article

  • Kohana -- Command Line

    - by swt83
    I'm trying to "faux-fork" a process (an email being sent via SMTP) in my web application, and the application is built on Kohana. $command = 'test/email'; exec('php index.php '.$command.' > /dev/null/ &', $errors, $response); I'm getting an error -- Notice: Undefined index: SERVER_NAME When I look into Kohana's index.php file, I see that it is looking for a variable named SERVER_NAME, but I guess it is coming up NULL because Kohana couldn't detect this value and set it prior to run. Any ideas how to get Kohana to run via command line?

    Read the article

  • Error with swig: undefined symbol: _ZN7hosters11hostersLink7getLinkEi

    - by Eduardo
    I'm trying to make a Python binding for this library: http://code.google.com/p/hosterslib/. I'm using SWIG; here is the code:

        %module pyhosters
        %{
        #include "hosters/hosters.hpp"
        %}
        %include "hosters/hosters.hpp"

    I run:

        swig -c++ -python -o swig_wrap.cxx swig.i

    and I compile with:

        g++ -O2 -fPIC -shared -o _pyhosters.so swig_wrap.cxx `python-config --libs --cflags` -lhosters -lcln -lhtmlcxx `pkg-config libglog --libs --cflags` -I/usr/include/python2.6 -Wall -Wextra

    But when I run python and import it, I get:

        import pyhosters
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "./pyhosters.py", line 7, in <module>
            import _pyhosters
        ImportError: ./_pyhosters.so: undefined symbol: _ZN7hosters11hostersLink7getLinkEi

    How can I solve that? Thanks.
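
    Two quick checks that usually narrow this down (the library path below is an assumption):

        # demangle the missing symbol to see which method the wrapper wants
        c++filt _ZN7hosters11hostersLink7getLinkEi
        # -> hosters::hostersLink::getLink(int)

        # then see whether the installed libhosters actually exports it
        nm -D --defined-only /usr/lib/libhosters.so | c++filt | grep 'getLink'

    If the symbol is absent, the installed library was built from different headers than the ones SWIG wrapped (for example, getLink declared in the header but never defined, or a version mismatch).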

    Read the article

  • Uploading files to varbinary(max) in SQL Server -- works on one server, not the other

    - by pjabbott
    I have some code that allows users to upload file attachments into a varbinary(max) column in SQL Server from their web browser. It had been working perfectly fine for almost two years, but all of a sudden it stopped working - and it stopped working only on the production database server; it still works fine on the development server. I can only conclude that the code is fine and there is something up with the instance of SQL Server itself, but I have no idea how to isolate the problem. I insert a record into the ATTACHMENT table, inserting only non-binary data like the title and the content type, and then chunk-upload the uploaded file using the following code:

        // get the file stream
        System.IO.Stream fileStream = postedFile.InputStream;

        // make an upload buffer
        byte[] fileBuffer;
        fileBuffer = new byte[1024];

        // make an update command
        SqlCommand fileUpdateCommand = new SqlCommand("update ATTACHMENT set ATTACHMENT_DATA.WRITE(@Data, NULL, NULL) where ATTACHMENT_ID = @ATTACHMENT_ID", sqlConnection, sqlTransaction);
        fileUpdateCommand.Parameters.Add("@Data", SqlDbType.Binary);
        fileUpdateCommand.Parameters.AddWithValue("@ATTACHMENT_ID", newId);

        while (fileStream.Read(fileBuffer, 0, fileBuffer.Length) > 0)
        {
            fileUpdateCommand.Parameters["@Data"].Value = fileBuffer;
            fileUpdateCommand.ExecuteNonQuery();    // <------ FAILS HERE
        }

        fileUpdateCommand.Dispose();
        fileStream.Close();

    Where it says "FAILS HERE", it sits for a while and then I get a SQL Server timeout error on the very first iteration through the loop. If I connect to the development database instead, everything works fine (it runs through the loop many, many times and the commit is successful). Both servers are identical (SQL Server 9.0.3042) and the schemas are identical as well. When I open Activity Monitor right after the timeout to see what's going on, it says the last command is

        (@Data binary(1024),@ATTACHMENT_ID decimal(4,0))update ATTACHMENT set ATTACHMENT_DATA.WRITE(@Data, NULL, NULL) where ATTACHMENT_ID = @ATTACHMENT_ID

    which I would expect, but it also says the command has a status of "Suspended" and a wait type of "PAGEIOLATCH_SH". I looked this up and it seems to be a bad thing, but I can't find anything specific to my situation. Ideas?
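
    As a side note on the quoted loop (unrelated to the timeout itself): the return value of Read is discarded, so a final partial chunk writes stale bytes left over from the previous iteration. A safer version of the loop:

        int bytesRead;
        while ((bytesRead = fileStream.Read(fileBuffer, 0, fileBuffer.Length)) > 0)
        {
            byte[] chunk = fileBuffer;
            if (bytesRead < fileBuffer.Length)
            {
                // trim the last chunk to the bytes actually read
                chunk = new byte[bytesRead];
                Array.Copy(fileBuffer, chunk, bytesRead);
            }
            fileUpdateCommand.Parameters["@Data"].Value = chunk;
            fileUpdateCommand.ExecuteNonQuery();
        }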

    Read the article

  • Generate lags in R

    - by Btibert3
    Hi all, I hope this is basic; I just need a nudge in the right direction. I have read a database table from MS Access into a data frame using RODBC. Here is the basic structure of what I read in:

        PRODID PROD Year Week QTY SALES INVOICES

    Here is the structure:

        str(data)
        'data.frame':   8270 obs. of  7 variables:
         $ PRODID  : int  20001 20001 20001 100001 100001 100001 100001 100001 100001 100001 ...
         $ PROD    : Factor w/ 1239 levels "1% 20qt Box",..: 335 335 335 128 128 128 128 128 128 128 ...
         $ Year    : int  2010 2010 2010 2009 2009 2009 2009 2009 2009 2010 ...
         $ Week    : int  12 18 19 14 15 16 17 18 19 9 ...
         $ QTY     : num  1 1 0 135 300 270 300 270 315 315 ...
         $ SALES   : num  15.5 0 -13.9 243 540 ...
         $ INVOICES: num  1 1 2 5 11 11 10 11 11 12 ...

    Here are the top few rows:

        head(data, n=10)
           PRODID           PROD Year Week QTY  SALES INVOICES
        1   20001      Dolie 12" 2010   12   1  15.46        1
        2   20001      Dolie 12" 2010   18   1   0.00        1
        3   20001      Dolie 12" 2010   19   0 -13.88        2
        4  100001 Cage Free Eggs 2009   14 135 243.00        5
        5  100001 Cage Free Eggs 2009   15 300 540.00       11
        6  100001 Cage Free Eggs 2009   16 270 486.00       11
        7  100001 Cage Free Eggs 2009   17 300 540.00       10
        8  100001 Cage Free Eggs 2009   18 270 486.00       11
        9  100001 Cage Free Eggs 2009   19 315 567.00       11
        10 100001 Cage Free Eggs 2010    9 315 569.25       12

    I simply want to generate lags for QTY, SALES, and INVOICES for each product, but I am not sure where to start. I know R is great with time series, but I'm not sure how that applies here. I have two questions: 1) I have the raw invoice data but have aggregated it for reporting purposes. Would it be easier if I didn't aggregate the data? 2) Regardless of aggregation, what functions will I need to loop over each product and generate the lags as I need them? In short, I want to loop over a set of records, calculate lags for a product (if possible), append the lags (as they apply) to the current record for each product, and write the results back to a table in my database for my reporting software to use. Any help you can provide will be greatly appreciated! Many thanks in advance, Brock
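
    A rough sketch of one base-R approach (column names taken from the str() output above; note that with aggregated data this gives the previous observed week per product, which is not necessarily week minus one):

        # order by product and time, then shift each series down one row
        # within its product group
        data <- data[order(data$PRODID, data$Year, data$Week), ]
        lag1 <- function(x) c(NA, head(x, -1))
        data$QTY_lag1      <- ave(data$QTY,      data$PRODID, FUN = lag1)
        data$SALES_lag1    <- ave(data$SALES,    data$PRODID, FUN = lag1)
        data$INVOICES_lag1 <- ave(data$INVOICES, data$PRODID, FUN = lag1)

        # write the result back for the reporting software, e.g. via RODBC:
        # sqlSave(channel, data, tablename = "PRODUCT_LAGS")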

    Read the article

  • Is there an easy method to combine two relative paths in C#?

    - by Ioannis
    I want to combine two relative paths in C#. For example:

        string path1 = "/System/Configuration/Panels/Alpha";
        string path2 = "Panels/Alpha/Data";

    I want to return:

        string result = "/System/Configuration/Panels/Alpha/Data";

    I can implement this by splitting the second path and comparing it in a for loop, but I was wondering if there is something similar to Path.Combine available, or if this can be accomplished with regular expressions or LINQ? Thanks
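
    A rough sketch of the split-and-compare approach, for reference (it assumes path1 is rooted and looks for the longest overlap between path1's tail and path2's head):

        using System;
        using System.Linq;

        static string CombineOverlapping(string path1, string path2)
        {
            string[] a = path1.Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);
            string[] b = path2.Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);

            // find the longest suffix of a that equals a prefix of b
            int overlap = 0;
            for (int len = Math.Min(a.Length, b.Length); len > 0; len--)
            {
                if (a.Skip(a.Length - len).SequenceEqual(b.Take(len)))
                {
                    overlap = len;
                    break;
                }
            }
            return "/" + string.Join("/", a.Concat(b.Skip(overlap)).ToArray());
        }

        // CombineOverlapping(path1, path2)
        //   -> "/System/Configuration/Panels/Alpha/Data"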

    Read the article

  • How to handle environment-specific application configuration organization-wide?

    - by Stuart Lange
    Problem: Your organization has many separate applications, some of which interact with each other (to form "systems"). You need to deploy these applications to separate environments to facilitate staged testing (for example, DEV, QA, UAT, PROD). A given application needs to be configured slightly differently in each environment (each environment has a separate database, for example). You want this re-configuration to be handled by some sort of automated mechanism, so that your release managers don't have to manually configure each application every time it is deployed to a different environment.

    Desired features: I would like to design an organization-wide configuration solution with the following properties (ideally):

    - Supports "one click" deployments (only the environment needs to be specified, and no manual re-configuration during/after deployment should be necessary).
    - There should be a single "system of record" where a shared environment-dependent property is specified (such as a database connection string that is shared by many applications).
    - Supports re-configuration of deployed applications (in the event that an environment-specific property needs to change), ideally without requiring a re-deployment of the application.
    - Allows an application to be run on the same machine, but in different environments (run a PROD instance and a DEV instance simultaneously).

    Possible solutions: I see two basic directions in which a solution could go:

    1. Make all applications "environment aware". You would pass the environment name (DEV, QA, etc.) at the command line to the app, and the app is "smart" enough to figure out the environment-specific configuration values at run-time. The app could fetch the values from flat files deployed along with the app, or from a central configuration service. (A sketch of this direction follows below.)
    2. Applications are not "smart" as they are in #1, and simply fetch configuration by property name from config files deployed with the app. The values of these properties are injected into the config files at deploy-time by the install program/script. That install script takes the environment name and fetches all relevant configuration values from a central configuration service.

    Question: How would/have you achieved a configuration solution that solves these problems and supports these desired features? Am I on target with the two possible solutions? Do you have a preference between those solutions? Also, please feel free to tell me that I'm thinking about the problem all wrong. Any feedback would be greatly appreciated.
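
    A minimal sketch of direction #1 (the file layout and key names are assumptions, not a recommendation):

        using System;
        using System.Collections.Generic;
        using System.IO;

        static class Config
        {
            // the app is launched as "app.exe DEV" / "app.exe PROD", so two
            // instances in different environments can share one machine
            public static Dictionary<string, string> Load(string environment)
            {
                var settings = new Dictionary<string, string>();
                // one key=value file per environment, deployed with the app
                foreach (string line in File.ReadAllLines("settings." + environment + ".conf"))
                {
                    int eq = line.IndexOf('=');
                    if (eq > 0)
                        settings[line.Substring(0, eq).Trim()] = line.Substring(eq + 1).Trim();
                }
                return settings;
            }
        }

        // var cfg = Config.Load(args[0]);
        // string connStr = cfg["Database.ConnectionString"];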

    Read the article

  • php.ini not being read with windows 7 installation

    - by Kyle
    I have installed PHP successfully on a Windows 7 machine, but I cannot for the life of me get it to read the php.ini file. I have uncommented the line for PHP to use MySQL, and when I run phpinfo(), it never shows up. I have checked to make sure there is only one php.ini file on my entire C:\ drive, and it's sitting in my C:\Windows folder. Has anyone else run into this, and does anyone know of a solution to get PHP to read the .ini so that I can enable some extensions (mysql etc.)?
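
    One quick diagnostic: both phpinfo() and the CLI report exactly which ini file was loaded, if any, so you can see whether PHP is looking where you think it is. The output below is illustrative:

        C:\> php --ini
        Configuration File (php.ini) Path: C:\Windows
        Loaded Configuration File:         (none)

    "Loaded Configuration File" showing "(none)" would confirm the file isn't being picked up. On Windows, PHP checks the PHPRC environment variable and the directory of php.exe before falling back to the Windows directory, so a stray php.ini elsewhere (or PHPRC pointing somewhere unexpected) can change what gets loaded.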

    Read the article

  • Can't get a SQL command to recognise the params added

    - by littlechris
    Hi, I've not used basic SQL commands for a while. I'm trying to pass a param to a sproc and then run it, but when I run the code I get a "not supplied" error. Code:

        SqlConnection conn1 = new SqlConnection(DAL.getConnectionStr());
        SqlCommand cmd1 = new SqlCommand("SProc_Item_GetByID", conn1);
        cmd1.Parameters.Add(new SqlParameter("@ID", itemId));
        conn1.Open();
        cmd1.ExecuteNonQuery();

    I'm not really sure why this would fail. Apologies for the basic question, but I'm lost! Thanks in advance.
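
    One likely culprit: the command defaults to CommandType.Text, so "SProc_Item_GetByID" is executed as a plain SQL batch that never binds @ID. A sketch of the fix:

        cmd1.CommandType = System.Data.CommandType.StoredProcedure;  // before ExecuteNonQuery()

    Also worth checking that itemId isn't null: a parameter whose value is null (rather than DBNull.Value) is simply not sent, which produces the same "was not supplied" error.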

    Read the article

  • Directory file size calculation - how to make it faster?

    - by Xinxua
    Using C#, I am finding the total size of a directory. The logic is this: get the files inside the folder, sum up the total size, find any subdirectories, and then do a recursive search. I tried another way to do this too, using FSO (obj.GetFolder(path).Size). There's not much difference in time between these two approaches. Now the problem is, I have tens of thousands of files in a particular folder and it takes at least 2 minutes to find the folder size. Also, if I run the program again, it happens very quickly (5 secs). I think Windows is caching the file sizes. Is there any way I can bring down the time taken when I run the program for the first time?
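
    For reference, a minimal sketch of the recursive approach; the cold first run is bound by disk seeks for directory metadata (which is why the warm, cached run is fast), so a single enumeration pass is about the best a straightforward managed version can do:

        using System.IO;

        static long DirectorySize(DirectoryInfo dir)
        {
            long size = 0;
            // one enumeration returns files and subdirectories together
            foreach (FileSystemInfo info in dir.GetFileSystemInfos())
            {
                FileInfo file = info as FileInfo;
                if (file != null)
                    size += file.Length;
                else
                    size += DirectorySize((DirectoryInfo)info);
            }
            return size;
        }

    Beyond that, the usual suggestions are interop with the Win32 FindFirstFile/FindNextFile pair (which returns sizes during enumeration, avoiding a per-file metadata lookup), at the cost of more complex code.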

    Read the article

  • Which network protocol to use for lightweight notification of remote apps?

    - by Chris Thornton
    I have this situation: client-initiated SOAP 1.1 communication between one server and, let's say, tens of thousands of clients. Clients are external, coming in through our firewall, authenticated by certificate, HTTPS, etc. They can be anywhere, and usually have their own firewalls, NAT routers, etc. They're truly external, not just remote corporate offices. They could be in a corporate/campus network, on DSL/cable, even dialup. The client uses Delphi (2005 + SOAP fixes from 2007), and the server is C#, but from an architecture/design standpoint, that shouldn't matter.

    Currently, clients push new data to the server and pull new data from the server on a 15-minute polling loop. The server currently does not push data - the client hits the "messagecount" method to see if there is new data to pull. If 0, it sleeps for another 15 min and checks again. We're trying to get that down to 7 seconds. If this were an internal app, with one or just a few dozen clients, we'd write a client "listener" SOAP service and would push data to it. But since they're external, sit behind their own firewalls, and sometimes behind private networks and NAT routers, this is not practical. So we're left with polling on a much quicker loop. 10K clients, each checking their messagecount every 10 seconds, is going to be 1000 messages/sec that will mostly just waste bandwidth, server, firewall, and authenticator resources. So I'm trying to design something better than what would amount to a self-inflicted DoS attack. I don't think it's practical to have the server send SOAP messages to the client (push), as this would require too much configuration at the client end. But I think there are alternatives that I don't know about. Such as:

    1) Is there a way for the client to make a request for GetMessageCount() via SOAP 1.1, get the response, and then perhaps "stay on the line" for 5-10 minutes to get additional responses in case new data arrives? I.e. the server says "0", then a minute later, in response to some SQL trigger (the server is C# on SQL Server, btw), knows that this client is still "on the line" and sends the updated message count of "5"?

    2) Is there some other protocol that we could use to "ping" the client, using information gathered from their last GetMessageCount() request?

    3) I don't even know. I guess I'm looking for some magic protocol where the client can send a GetMessageCount() request which would include info for "oh, by the way, in case the answer changes in the next hour, ping me at this address...".

    Also, I'm assuming that any of these "keep the line open" schemes would seriously impact the server sizing, as it would need to keep many thousands of connections open simultaneously. That would likely impact the firewalls too, I think. Is there anything out there like that? Or am I pretty much stuck with polling? TIA, Chris
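
    Option 1 is essentially long polling. A rough sketch of the server-side shape of it (method and helper names here are made up):

        // the service holds the request open until data arrives or a
        // timeout elapses; an idle client costs one parked connection
        // instead of a poll every few seconds
        public int GetMessageCount(string clientId, int maxWaitSeconds)
        {
            DateTime deadline = DateTime.UtcNow.AddSeconds(maxWaitSeconds);
            do
            {
                int count = CountPendingMessages(clientId);   // hypothetical lookup
                if (count > 0)
                    return count;
                System.Threading.Thread.Sleep(1000);          // or block on an event
            } while (DateTime.UtcNow < deadline);
            return 0;   // client re-issues the request immediately
        }

    The concern at the end of the question is the real trade-off: with a synchronous stack, 10K parked requests means 10K blocked threads, so this only scales with an asynchronous request model on the server.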

    Read the article

  • WCF: connecting to service over internet times out

    - by Shaul
    Still on the WCF learning curve: I've set up a self-hosted WCF Service (WSDualHttpBinding), which works fine on my own computer, which resides behind a firewall. If I run the client on my own computer, everything works great. Now I installed the client on a computer outside my network, and I'm trying to access the service via a dynamic DNS, like so: http://mydomain.dyndns.org:8000/MyService. My port forwarding issues were taken care of in a previous question; I can now see the service is up in my browser. But now when I try to run the client on the other machine, I get the following error message: "The open operation did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout." I have disabled security on the service, so that's not it. What else might be preventing the connection from happening?
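
    One thing worth ruling out, given the binding: WSDualHttpBinding is duplex, so the service opens a second HTTP connection back to the client for callbacks, and across the internet the client's own NAT/firewall will usually block that reverse connection - which can surface as exactly this kind of Open() timeout. A sketch of the client-side knob involved (the address is a placeholder):

        WSDualHttpBinding binding = new WSDualHttpBinding(WSDualHttpSecurityMode.None);
        // where the service will try to reach the client's callback listener
        binding.ClientBaseAddress = new Uri("http://client-public-address:8001/callback");

    If the client can't be made reachable, the usual alternative is a binding that multiplexes callbacks over the client-initiated connection, such as NetTcpBinding with a duplex contract.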

    Read the article

  • DataMapper: using auto_migrate! with many-to-many dependencies?

    - by pschuegr
    Hi, I'm trying to migrate my app from MySQL to PostgreSQL, using Rails3-pre and the latest DataMapper. I have several models which are related through many-to-many relationships using :through => Resource, which means that DataMapper creates a join table with foreign keys for both models. I can't auto_migrate! these changes, because I keep getting this:

        ERROR:  cannot drop table users because other objects depend on it
        DETAIL:  constraint artist_users_owner_fk on table artist_users depends on table users
        constraint site_users_owner_fk on table site_users depends on table users
        HINT:  Use DROP ... CASCADE to drop the dependent objects too.

    I have tried everything I can think of, and thought I had things working when I added :constraint => :skip to the field definition, but I keep getting that error back when I try to run auto_migrate. I thought that :skip meant that it would ignore the dependents, but maybe that only applies to deleting rows and not dropping tables? I should mention that I can run auto_migrate once after I nuke the db, but after that, errors. Any suggestions or advice much appreciated.

    Read the article

  • Running The JVM From Within An MXML Component

    - by Joshua
    Thinking outside of the box here... What possible basic approaches could be taken in an effort to create a Flex component that could run Java? I know I can easily use Flex to browse to or launch a Java app, but there are things I can only do if I can run the Java from WITHIN an MXML component. In the strictest sense, I know it's not impossible (i.e., if you had all the source code for Flex and for the JVM), but what's the least impractical means to this end? Showcase your creativity.

    Read the article

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small, I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (on file-creation overhead) and that simple commands like ls require extra care due to internal limits of bash:

        -bash: /bin/ls: Argument list too long

    Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program which simply aggregates the output data into a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from an embarrassingly parallel computation before it gets unmanageable?
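
    A rough sketch of the append-with-locking approach (paths are placeholders; flock(1) ships with util-linux and needs no admin rights):

        # at the end of each job: take an exclusive advisory lock on a
        # lockfile, then append -- two finishing jobs can no longer
        # interleave their lines
        (
            flock -x 200
            cat "$TMPDIR/result.txt" >> /shared/results.txt
        ) 200> /shared/results.lock

    Another common pattern is to keep per-job files but shard them into subdirectories (say, by job ID modulo 1000) so no single directory grows large enough to break ls and friends, then concatenate per shard afterwards.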

    Read the article

  • Why is it supposedly "hard" to deploy Ruby on Rails to production?

    - by johnny
    I admit that I don't follow much of anything "right" on deploying test versus production code. I have been using ASP.NET, and I typically run it locally in Visual Studio; it works; I upload it; I test it again on the production server. I have read several people say that deploying Rails apps is harder, and there are special programs/ways described on the Ruby site for deploying RoR. I've only toyed with RoR. What is special about deployment? You don't just copy and paste the code and run it (from the development machine to production)? Is it because one runs in Apache and the other on the built-in server? This will be on a Mac server, if it matters. Thank you for comments.

    Read the article

  • Avoiding dog-piling or thundering herd in a memcached expiration scenario

    - by Quintin Par
    I have the result of a query that is very expensive: it is the join of several tables plus a map-reduce job. This is cached in memcached for 15 minutes. Once the cache expires, the queries are obviously run and the cache is warmed again. But at the point of expiration, the thundering herd problem can kick in. One way to fix this, which I do right now, is to run a scheduled task that kicks in at the 14th minute. But somehow this looks very suboptimal to me. Another approach I like is nginx's proxy_cache_use_stale updating; mechanism: the webserver/machine continues to deliver stale cache while a thread kicks in the moment expiration happens and updates the cache. Has someone applied this to a memcached scenario, even though I understand this is a client-side strategy? If it helps, I use Django.
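
    A rough sketch of the stale-while-revalidate idea on top of Django's cache API, using add() as a cheap lock so only one caller rebuilds (key names and TTLs are made up):

        import time
        from django.core.cache import cache

        STALE_AFTER = 15 * 60   # start refreshing after this
        HARD_TTL    = 20 * 60   # absolute memcached expiry, kept longer

        def get_report():
            entry = cache.get('report')
            if entry is None:
                # cold cache: nothing stale to serve, must do the work
                value = run_expensive_query()          # hypothetical
                cache.set('report', (value, time.time()), HARD_TTL)
                return value
            value, stored_at = entry
            if time.time() - stored_at > STALE_AFTER:
                # stale: add() succeeds for exactly one caller, who rebuilds;
                # everyone else keeps serving the stale value meanwhile
                if cache.add('report:refresh-lock', 1, 60):
                    value = run_expensive_query()
                    cache.set('report', (value, time.time()), HARD_TTL)
                    cache.delete('report:refresh-lock')
            return value

    The rebuild still happens inside the unlucky request's cycle here; moving it to a background worker (or the existing scheduled task) keeps every request fast.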

    Read the article

  • Saving part of an audio file (Java)

    - by m159701m
    Hi everyone, while playing an audio file (.wav) I want, if I resort to Ctrl+C, to stop the playback and save part of the audio file in a file called "file2.wav". Here's the thread I'd like to add to my code. Unfortunately, it doesn't work at all. Thanks in advance.

        class myThread extends Thread {
            public void run() {
                try {
                    PipedOutputStream poStream = new PipedOutputStream();
                    PipedInputStream piStream = new PipedInputStream();
                    poStream.connect(piStream);
                    File cutaudioFile = new File("file2.wav");
                    AudioInputStream ais = new AudioInputStream(piStream, AudioFileFormat.Type.WAVE, cutaudioFile);
                    poStream.write(ais, AudioFileFormat.Type.WAVE, cutaudioFile);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }  // end run
        }  // end myThread
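
    For comparison, a sketch of what this was presumably reaching for: writing a WAVE file from a stream is AudioSystem.write(), not PipedOutputStream.write(), and the AudioInputStream constructor takes a format and a frame count rather than a File (imports from javax.sound.sampled and java.io assumed; the frame count must come from whatever captured the audio):

        class SaveThread extends Thread {
            private final PipedInputStream piStream;
            private final AudioFormat format;
            private final long frameLength;    // frames to save

            SaveThread(PipedInputStream piStream, AudioFormat format, long frameLength) {
                this.piStream = piStream;
                this.format = format;
                this.frameLength = frameLength;
            }

            public void run() {
                try {
                    AudioInputStream ais = new AudioInputStream(piStream, format, frameLength);
                    AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("file2.wav"));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }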

    Read the article

  • What's the best method for queuing time-sensitive messages with PHP/MySQL?

    - by Mike Diena
    I'm building an SMS call-and-response system in a new app that receives a message via an aggregator gateway, checks it for functional keywords (run, stop, ask, etc.), then processes it appropriately (saves to the database, returns an answer, or executes a task based on the user's authorization). It's running fine at the moment, as there are only a few users, but I figure it's going to have more issues as we scale it up. We're currently running it on a single DV machine (Media Temple base DV). My question is this: does it make more sense to set something up like memcached to run a queue, or a simple database with a daemon running to process each message one by one? I don't have much experience with either, so any advice would be helpful. Since the messaging is somewhat time-sensitive, what would be the fastest and most reliable way to handle this? Also, since we're sending responses, I'll probably need to set up an outbound message queue as well. Would it make sense to use the same concept for both?
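
    If the database route wins, a rough sketch of the claim step a daemon would run (table and column names are made up; the single UPDATE is atomic, so several workers can share one queue without double-sending):

        -- claim the oldest pending message for this worker
        UPDATE sms_queue
           SET status = 'working', worker_pid = 12345
         WHERE status = 'pending'
         ORDER BY created_at
         LIMIT 1;

        -- then fetch what was claimed
        SELECT * FROM sms_queue
         WHERE status = 'working' AND worker_pid = 12345;

    The same table-plus-status pattern works for the outbound side; the daemons just point at different tables (or a direction column).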

    Read the article

  • Correct way to give users access to additional schemas in Oracle

    - by Jacob
    I have two users, Bob and Alice, in Oracle, both created by running the following commands as sysdba from SQL*Plus:

        create user $blah identified by $password;
        grant resource, connect, create view to $blah;

    I want Bob to have complete access to Alice's schema (that is, all tables), but I'm not sure what grant to run, and whether to run it as sysdba or as Alice. Happy to hear about any good pointers to reference material as well - I don't seem to be able to get a good answer to this from either the Internet or "Oracle Database 10g: The Complete Reference", which is sitting on my desk.
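
    For what it's worth, Oracle 10g has no single schema-level grant, so the usual pattern is to generate per-table grants from the data dictionary and run the output as Alice (or as a DBA):

        -- produces one GRANT statement per table owned by ALICE
        SELECT 'GRANT SELECT, INSERT, UPDATE, DELETE ON alice.' || table_name || ' TO bob;'
          FROM all_tables
         WHERE owner = 'ALICE';

    The blunt alternative is a system privilege such as SELECT ANY TABLE granted to Bob, but that applies to every schema, not just Alice's, so it's usually too broad.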

    Read the article

  • Java Socket Connection is flooding network OR resulting in high ping

    - by user1461100
    I have a little problem with my Java socket code. I'm writing an Android client application which is sending data to a multithreaded Java socket server on my PC through a direct(!) wireless connection. It works fine, but I want to improve it for mobile applications, as it is very power-consuming right now. When I remove two particular lines in my code, the CPU usage of my mobile device (HTC One X) is totally okay, but then my connection seems to have high ping times or something like that... Here is a server code snippet where I receive the client's data:

        while (true) {
            try {
                ....
                Object obj = in.readObject();
                if (obj != null) {
                    Class clazz = obj.getClass();
                    String className = clazz.getName();
                    if (className.equals("java.lang.String")) {
                        String cmd = (String) obj;
                        if (cmd.equals("dc")) {
                            System.out.println("Client " + id + " disconnected!");
                            Server.connectedClients[id - 1] = false;
                            break;
                        }
                        if (cmd.substring(0, 1).equals("!")) {
                            robot.keyRelease(PlayerEnum.getKey(cmd, id));
                        } else {
                            robot.keyPress(PlayerEnum.getKey(cmd, id));
                        }
                    }
                }
            } catch ....

    Here's the client part, where I send my data in a while loop:

        private void networking() {
            try {
                if (client != null) {
                    ....
                    out.writeObject(sendQueue.poll());
                    ....
                }
            } catch ....
        }

    When I write it this way, I send data every time the while loop executes; when sendQueue is empty, a null "object" is sent. This results in "high" network traffic and "high" CPU usage. BUT: all sent commands are received nearly immediately. When I change the code to the following:

        while (true)
            ...
            if (sendQueue.peek() != null) {
                out.writeObject(sendQueue.poll());
            }
            ...

    the CPU usage is totally okay, but I'm getting some lag... the commands do not arrive fast enough. As I said, it works fine (besides CPU usage) if I'm sending data (with those null objects) on every while-loop execution, but I'm sure that this is very rough coding style, because I'm kind of flooding the network. Any hints? What am I doing wrong? Thanks for your help! Sincerely yours, maaft
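
    A sketch of the usual fix (assuming sendQueue can be swapped for a java.util.concurrent queue): let the sending thread block until there is something to send, instead of spinning and sending null filler.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        BlockingQueue<Object> sendQueue = new LinkedBlockingQueue<Object>();

        private void networking() throws Exception {
            while (true) {
                // take() parks the thread (zero CPU, no filler traffic)
                // until another thread offers a command into the queue
                Object msg = sendQueue.take();
                out.writeObject(msg);
                out.flush();
            }
        }

    The remaining latency knob is Nagle's algorithm: socket.setTcpNoDelay(true) on both ends makes small commands go out immediately instead of being coalesced into larger packets.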

    Read the article

  • How do you use printf from Assembly?

    - by bobobobo
    I have an MSVC++ project set up to compile and run assembly code. In main.c:

        #include <stdio.h>

        void go();

        int main()
        {
            go();   // call the asm routine
        }

    In go.asm:

        .586
        .model flat, c

        .code
        go PROC
            invoke puts, "hi"
            RET
        go ENDP
        end

    But when I compile and run, I get an error in go.asm:

        error A2006: undefined symbol : puts

    How do I define the symbols in <stdio.h> for the .asm files in the project?
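
    A sketch of what MASM wants here (untested): a PROTO for the C routine so invoke can resolve it, and the string placed in a data segment, since invoke can't take a string literal as an argument. Linking the project against the CRT is assumed.

        .586
        .model flat, c

        puts PROTO C :PTR BYTE      ; declare the CRT symbol for the assembler

        .data
        msg byte "hi", 0            ; NUL-terminated string for puts

        .code
        go PROC
            invoke puts, ADDR msg
            RET
        go ENDP
        end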

    Read the article

  • Active Directory Programming help needed

    - by ricky2002
    Hello friends, I want to make a Windows service in .NET which has to run on Windows Server 2003 and 2008. The main functionality I need: as soon as a network user logs in, display his:

    - user name in the Active Directory domain
    - IP address from which he connected

    I do not want to install or run any program/script on the client machines. Any help on how to go about developing this will be greatly appreciated. I saw some articles explaining this using the System.Environment namespace, and some others, but they only shed light on the locally logged-on user.
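
    One direction to look at (a sketch, not a full solution): domain logons are audited on the domain controllers, so a service running on a DC can watch the Security event log instead of touching clients. Caveats: EntryWritten only fires for the local machine's log, audit policy for logon events must be enabled, and the event IDs differ between Server 2003 (672/540) and Server 2008 (4624).

        using System.Diagnostics;

        // inside the service's OnStart, running on the domain controller
        EventLog securityLog = new EventLog("Security");
        securityLog.EntryWritten += delegate(object sender, EntryWrittenEventArgs e)
        {
            long id = e.Entry.InstanceId;
            if (id == 672 || id == 540 || id == 4624)   // logon audit events
            {
                // the entry's message text carries the account name and the
                // source workstation/IP; parse them out of e.Entry.Message
                string message = e.Entry.Message;
            }
        };
        securityLog.EnableRaisingEvents = true;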

    Read the article

  • SQLite REGEXP initializer not working in production on Heroku

    - by morcutt
    I am using this to create a REGEXP function in SQLite with Rails, because SQLite does not support REGEXP out of the box. When running this app on Heroku rather than on localhost, it does not work. Is the initializer not being run when the app launches? The log files show:

        2011-03-04T18:35:36-08:00 app[web.1]: ActiveRecord::StatementInvalid (PGError: ERROR:  syntax error at or near "REGEXP"
        2011-03-04T18:35:36-08:00 app[web.1]: LINE 1: ... "posts".* FROM "posts" WHERE (message REGEXP '(?...
        2011-03-04T18:35:36-08:00 app[web.1]:                                                   ^
        2011-03-04T18:35:36-08:00 app[web.1]: : SELECT "posts".* FROM "posts" WHERE (message REGEXP '(?:^|\s+)/(\w+)' and user_id = 1))

    These are similar to what the development logs produced when I had deleted the implemented code. It seems as though the REGEXP initializer is not being run at startup.
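
    Note that the error is a PGError: Heroku provisions PostgreSQL, so the SQLite REGEXP initializer never applies there, and Postgres simply has no REGEXP keyword. Its native regex operator is ~, so a sketch of an adapter-aware query (model and column names taken from the log; the variable user_id is assumed) might look like:

        # Postgres matches with ~; SQLite uses the REGEXP function the
        # initializer defines
        op = Post.connection.adapter_name =~ /postgres/i ? '~' : 'REGEXP'
        Post.where("message #{op} ? AND user_id = ?", '(?:^|\s+)/(\w+)', user_id)

    Postgres regexes do accept (?: ... ) groups, so the original pattern should carry over unchanged.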

    Read the article
