Search Results

Search found 17501 results on 701 pages for 'stored functions'.

  • WAMP cannot load mysqli extension

    - by localhost
    WAMP installed fine, no problems, BUT... When going to phpMyAdmin, I get the error from phpMyAdmin as follows: "Cannot load mysqli extension. Please check your PHP configuration". Also, phpMyAdmin documentation explains this error message as follows: "To connect to a MySQL server, PHP needs a set of MySQL functions called "MySQL extension". This extension may be part of the PHP distribution (compiled-in), otherwise it needs to be loaded dynamically. Its name is probably mysql.so or php_mysql.dll. phpMyAdmin tried to load the extension but failed. Usually, the problem is solved by installing a software package called "PHP-MySQL" or something similar." Finally, the apache_error.log file has the following PHP warnings (see the mySQL warning): PHP Warning: Zend Optimizer does not support this version of PHP - please upgrade to the latest version of Zend Optimizer in Unknown on line 0 PHP Warning: Zend Platform does not support this version of PHP - please upgrade to the latest version of Zend Platform in Unknown on line 0 PHP Warning: Zend Debug Server does not support this version of PHP - please upgrade to the latest version of Zend Debug Server in Unknown on line 0 PHP Warning: gd wrapper does not support this version of PHP - please upgrade to the latest version of gd wrapper in Unknown on line 0 PHP Warning: java wrapper does not support this version of PHP - please upgrade to the latest version of java wrapper in Unknown on line 0 PHP Warning: mysql wrapper does not support this version of PHP - please upgrade to the latest version of mysql wrapper in Unknown on line 0 So, for some reason PHP is not recognizing the mysql extension. Anyone know why? Any solution or workaround?
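
    A hedged starting point (the paths and PHP version below are assumptions - adjust them to your WAMP layout): check via phpinfo() which php.ini Apache actually loads ("Loaded Configuration File"), then make sure that file points extension_dir at the bundled PHP's ext folder and enables the MySQL extensions, for example:

        ; php.ini loaded by Apache -- paths are illustrative, confirm them via phpinfo()
        extension_dir = "c:/wamp/bin/php/php5.3.0/ext"
        extension = php_mysql.dll
        extension = php_mysqli.dll

    The "wrapper does not support this version of PHP" warnings suggest the loaded configuration still references extensions built for a different PHP version, which is consistent with Apache reading a stale php.ini.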

  • ETL, Esper or Drools?

    - by geoaxis
    Hello, the question environment relates to Java EE and Spring. I am developing a system which can start and stop arbitrary TCP (or other) listeners for incoming messages. There could be a need to authenticate these messages. These messages need to be parsed and stored in some other entities. These entities model which fields they store. So, for example, if I have property1 with two text fields FillLevel1 and FillLevel2, I could receive messages over TCP which have both fill levels specified in text as F1=100;F2=90. Later I could add another field, say FillLevel3, when I start receiving messages F1=xx;F2=xx;F3=xx. But this is a conscious decision on the part of the system modeler. My question is: what do you think is better to use for parsing and storing the messages? ETL (using Pentaho, which is used in another system), where you store the raw message and use a task executor to consume the messages one by one and store the transformed messages as per your rules? One could use Esper or Drools to do the same thing, storing rules and executing them with a timer, but I am not sure how dynamic you can get with making rules (they have to be made by the end user in a running system, preferably in the most user-friendly way, i.e. no scripts or code, only a GUI). The end user should be capable of changing the parse rules. It is also possible that the end user might want to change the archived data as well (for example, if a new FillLevel value is added, one would like to put a FillLevel=-99 in the previous values to make the data consistent). Please ask for explanations; I have the feeling that I need to revise this question a bit. Thanks
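
    Whichever engine ends up owning the rules, the raw parsing step is small on its own. A minimal Java sketch (class and method names are invented for illustration) that turns a message like F1=100;F2=90 into an ordered field map:

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Hypothetical helper: split "F1=100;F2=90" into a field -> value map.
        public class FieldMessageParser {
            public static Map<String, String> parse(String message) {
                Map<String, String> fields = new LinkedHashMap<String, String>();
                for (String pair : message.split(";")) {
                    if (pair.isEmpty()) {
                        continue;
                    }
                    String[] kv = pair.split("=", 2);
                    fields.put(kv[0].trim(), kv.length > 1 ? kv[1].trim() : "");
                }
                return fields;
            }

            public static void main(String[] args) {
                System.out.println(parse("F1=100;F2=90")); // {F1=100, F2=90}
            }
        }

    Mapping the F1/F2 keys onto FillLevel1/FillLevel2 entity fields would then be the part driven by whatever user-editable rules (ETL transform, Esper statement or Drools rule) you settle on.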

  • Handling incremental Data Modeling Changes in Functional Programming

    - by Adam Gent
    Most of the problems I have to solve in my job as a developer have to do with data modeling. For example, in an OOP web application world I often have to change the data properties in an object to meet new requirements. If I'm lucky I don't even need to programmatically add new "behavior" code (functions, methods); instead I can declaratively add validation and even UI options by annotating the property (Java). In functional programming it seems that adding new data properties requires lots of code changes because of pattern matching and data constructors (Haskell, ML). How do I minimize this problem? This seems to be a recognized problem, as Xavier Leroy states nicely on page 24 of "Objects and Classes vs. Modules" - to summarize for those who don't have a PostScript viewer, it basically says FP languages are better than OOP languages for adding new behavior over data objects, but OOP languages are better for adding new data objects/properties. Are there any design patterns used in FP languages to help mitigate this problem? I have read Philip Wadler's recommendation of using monads to help with this modularity problem, but I'm not sure I understand how.
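
    As a rough Haskell illustration of the trade-off (the type and field names are invented): pattern matching on a constructor spreads knowledge of every field across the code, whereas record accessors confine the damage of adding a field to the places that construct values:

        -- Invented example type: adding a fillLevel3 field later only breaks
        -- construction sites, because the consumers below use accessors rather
        -- than matching on the full constructor shape.
        data Property1 = Property1
          { fillLevel1 :: Int
          , fillLevel2 :: Int
          } deriving (Show)

        describe :: Property1 -> String
        describe p = "levels: " ++ show (fillLevel1 p, fillLevel2 p)

        main :: IO ()
        main = putStrLn (describe (Property1 { fillLevel1 = 100, fillLevel2 = 90 }))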

  • Copy an entity in Google App Engine datastore in Python without knowing property names at 'compile' time

    - by Gordon Worley
    In a Python Google App Engine app I'm writing, I have an entity stored in the datastore that I need to retrieve, make an exact copy of it (with the exception of the key), and then put this entity back in. How should I do this? In particular, are there any caveats or tricks I need to be aware of when doing this so that I get a copy of the sort I expect and not something else. ETA: Well, I tried it out and I did run into problems. I would like to make my copy in such a way that I don't have to know the names of the properties when I write the code. My thinking was to do this:

        # theThing = a particular entity we pull from the datastore with model Thing
        copyThing = Thing(user = user)
        for thingProperty in theThing.properties():
            copyThing.__setattr__(thingProperty[0], thingProperty[1])

    This executes without any errors... until I try to pull copyThing from the datastore, at which point I discover that all of the properties are set to None (with the exception of the user and key, obviously). So clearly this code is doing something, since it's replacing the defaults with None (all of the properties have a default value set), but not at all what I want. Suggestions?
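
    If I remember the old google.appengine.ext.db API correctly, properties() returns a dict keyed by property name, so iterating it yields name strings and thingProperty[0]/thingProperty[1] are just the first two characters of each name, which would explain the None values. A hedged sketch that copies the property values instead (the helper name is invented):

        # Sketch for the old db API (not ndb): copy property *values* with getattr
        # and build a fresh instance, which gets a new key on put().
        def clone_entity(original, **overrides):
            cls = original.__class__
            values = dict(
                (name, getattr(original, name))
                for name in cls.properties()
            )
            values.update(overrides)
            return cls(**values)

        # usage, mirroring the question's Thing model:
        copyThing = clone_entity(theThing, user=user)
        copyThing.put()

    Special property types (ReferenceProperty, auto-now DateTimeProperty) may need extra care, so treat this as a starting point rather than a guaranteed exact copy.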

  • Code Golf: Shortest Turing-complete interpreter.

    - by ilya n.
    I've just tried to create the smallest possible language interpreter. Would you like to join and try? Rules of the game: You should specify a programming language you're interpreting. If it's a language you invented, it should come with a list of commands in the comments. Your code should start with an example program and data assigned to your code and data variables. Your code should end with output of your result. It's preferable that there are debug statements at every intermediate step. Your code should be runnable as written. You can assume that data are 0s and 1s (int, string or boolean, your choice) and output is a single bit. The language should be Turing-complete in the sense that for any algorithm written on a standard model, such as a Turing machine, Markov chains, or similar of your choice, it's reasonably obvious (or explained) how to write a program that, after being executed by your interpreter, performs the algorithm. The length of the code is defined as the length of the code after removal of the input part, output part, debug statements and unnecessary whitespace. Please add the resulting code and its length to the post. You can't use functions that make the compiler execute code for you, such as eval(), exec() or similar. This is a Community Wiki, meaning neither the question nor answers get reputation points from votes. But vote anyway!

  • How to create an XML document from a .NET object?

    - by JL
    I have the following variable that accepts a file name: var xtr = new XmlTextReader(xmlFileName) { WhitespaceHandling = WhitespaceHandling.None }; var xd = new XmlDocument(); xd.Load(xtr); I would like to change it so that I can pass in an object. I don't want to have to serialize the object to file first. Is this possible? Update: My original intentions were to take an xml document, merge some xslt (stored in a file), then output and return html... like this: public string TransformXml(string xmlFileName, string xslFileName) { var xtr = new XmlTextReader(xmlFileName) { WhitespaceHandling = WhitespaceHandling.None }; var xd = new XmlDocument(); xd.Load(xtr); var xslt = new System.Xml.Xsl.XslCompiledTransform(); xslt.Load(xslFileName); var stm = new MemoryStream(); xslt.Transform(xd, null, stm); stm.Position = 1; var sr = new StreamReader(stm); xtr.Close(); return sr.ReadToEnd(); } In the above code I am reading in the xml from a file. Now what I would like to do is just work with the object, before it was serialized to the file. So let me illustrate my problem using code public string TransformXMLFromObject(myObjType myobj , string xsltFileName) { // Notice the xslt stays the same. // Its in these next few lines that I can't figure out how to load the xml document (xd) from an object, and not from a file.... var xtr = new XmlTextReader(xmlFileName) { WhitespaceHandling = WhitespaceHandling.None }; var xd = new XmlDocument(); xd.Load(xtr); }
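
    One way to do this, sketched below, is to let XmlSerializer write the object into a MemoryStream and load the XmlDocument from that stream, so no temporary file is involved. It assumes myObjType is something XmlSerializer can handle (a public type with a parameterless constructor) and needs using directives for System.IO, System.Xml and System.Xml.Serialization:

        public string TransformXmlFromObject(myObjType myobj, string xsltFileName)
        {
            var xd = new XmlDocument();
            using (var ms = new MemoryStream())
            {
                var serializer = new XmlSerializer(typeof(myObjType));
                serializer.Serialize(ms, myobj);
                ms.Position = 0;          // rewind before loading
                xd.Load(ms);
            }

            var xslt = new System.Xml.Xsl.XslCompiledTransform();
            xslt.Load(xsltFileName);

            using (var stm = new MemoryStream())
            {
                xslt.Transform(xd, null, stm);
                stm.Position = 0;         // the original code set Position = 1, which drops the first byte
                return new StreamReader(stm).ReadToEnd();
            }
        }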

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in php projects. There are a number of Rails-style migration projects available for php. http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches. General problem with this kind of system: When development branch A is checked out, and you want to check out branch B instead, B may have new migration files. This is fine, migrating to newer content is straight forward. If branch A has newer migration files, you would need to migrate downwards to the nearest shared patch. If branch A and B have significantly different code bases, you may have to migrate down even further. This may mean: Check out B, determine shared patch number, check out A, migrate downwards to this patch. This must be done from A since the actual applied patches are not available in B. Then, checkout branch B, and migrate to newest B patch. Reverse process again when going from B to A. Proposed system: When migrating upwards, instead of just storing the patch version, serialize the whole patch in database for later use, though I'd probably only need the down() method. When changing branches, compare patches that have been run to patches that are available in the destination branch. Determine nearest shared patch (or oldest difference, maybe) between db table of run patches and patches in destination branch by ID or hash. Could also look for new or missing patches that are buried under a number of shared patches between the two branches. Automatically merge down to the nearest shared patch, using the db table stored down() methods, and then merge up to the branche's latest patch. My question is: Is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.
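
    The "nearest shared patch" step of the proposed system is cheap to prototype with plain array functions; a rough PHP sketch (variable names are invented, and the ids could equally be hashes):

        <?php
        $appliedIds = array(101, 102, 105, 107);   // patches recorded as run in the db
        $branchIds  = array(101, 102, 105, 106);   // patch files present in the destination branch

        $shared    = array_intersect($appliedIds, $branchIds);
        $rollBack  = array_diff($appliedIds, $branchIds);   // run the stored down() bodies for these first
        $applyNext = array_diff($branchIds, $appliedIds);   // then migrate up through these

        $nearestSharedPatch = empty($shared) ? null : max($shared);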

  • Handling dependencies with IoC that change within a single function call

    - by Jess
    We are trying to figure out how to setup Dependency Injection for situations where service classes can have different dependencies based on how they are used. In our specific case, we have a web app where 95% of the time the connection string is the same for the entire Request (this is a web application), but sometimes it can change. For example, we might have 2 classes with the following dependencies (simplified version - service actually has 4 dependencies): public LoginService (IUserRepository userRep) { } public UserRepository (IContext dbContext) { } In our IoC container, most of our dependencies are auto-wired except the Context for which I have something like this (not actual code, it's from memory ... this is StructureMap): x.ForRequestedType().Use() .WithCtorArg("connectionString").EqualTo(Session["ConnString"]); For 95% of our web application, this works perfectly. However, we have some admin-type functions that must operate across thousands of databases (one per client). Basically, we'd want to do this: public CreateUserList(IList<string> connStrings) { foreach (connString in connStrings) { //first create dependency graph using new connection string ???? //then call service method on new database _loginService.GetReportDataForAllUsers(); } } My question is: How do we create that new dependency graph for each time through the loop, while maintaining something that can easily be tested?
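
    One hedged way to keep the common case auto-wired while still supporting the multi-database loop is to push the "which connection string" decision behind a small factory; the sketch below reuses the question's types, but the factory and the concrete SqlContext are invented names:

        public interface ILoginServiceFactory
        {
            LoginService ForConnectionString(string connectionString);
        }

        public class LoginServiceFactory : ILoginServiceFactory
        {
            public LoginService ForConnectionString(string connectionString)
            {
                IContext context = new SqlContext(connectionString);   // assumed IContext implementation
                IUserRepository users = new UserRepository(context);
                return new LoginService(users);
            }
        }

        public void CreateUserList(IList<string> connStrings)
        {
            foreach (var connString in connStrings)
            {
                // build a fresh graph per database, then call the service
                _loginServiceFactory.ForConnectionString(connString).GetReportDataForAllUsers();
            }
        }

    The factory interface is what gets registered in the container (and faked in tests), so the 95% case keeps its Session-based registration untouched.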

  • Exam Questions that use .Demand or .LinkDemand COULD NOT BE ANY MORE CONFUSING OR AMBIGUOUS ????

    - by IbrarMumtaz
    I am 110% sure this is WRONG !!!! Q.12) You develop a library, and want to ensure that the functions in the library cannot be either directly or indirectly invoked by applications that are not running on the local intranet. What attribute would you add to each method? A. [UrlIdentityPermission(SecurityAction.RequestRefuse, Url="http://myintranet")] B. [UrlIdentityPermission(SecurityAction.LinkDemand, Url="http://myintranet")] (correct answer) C. [UrlIdentityPermission(SecurityAction.Demand, Url="http://myintranet")] D. [UrlIdentityPermission(SecurityAction.Assert, Url="http://myintranet")] Explanation: LinkDemand should be used as it ensures that all callers in the call stack have the necessary permission. In this case it ensures that all callers in the call stack are on the local intranet. There is an identical question on Transcender, so I already had a clue what was going on, but Transcender was much more informative than this drivel, as it mentioned class level and not assembly level. It also mentioned that some callers may be coming externally from the company intranet via authorised and authenticated credentials. With that information it is easy to see why .Demand would be the wrong option to go for. So Transcender was right .... so I thought fine, that makes sense. With that information still fresh in my brain I had a good idea of what was going on in the question. To my surprise .Demand was wrong again !!!! WHAT? I am really starting to hate this setting now. I cannot be any more p*ssed right now!!! :@ Thanks For Reading, Ibrar

  • Worklight client-side API to overrideBackButton doesn't work

    - by user2503429
    I want to make something happen when back button is pressed so I put this code in Myhtml.js file : function wlCommonInit(){ /* * Application is started in offline mode as defined by a connectOnStartup property in initOptions.js file. * In order to begin communicating with Worklight Server you need to either: * * 1. Change connectOnStartup property in initOptions.js to true. * This will make Worklight framework automatically attempt to connect to Worklight Server as a part of application start-up. * Keep in mind - this may increase application start-up time. * * 2. Use WL.Client.connect() API once connectivity to a Worklight Server is required. * This API needs to be called only once, before any other WL.Client methods that communicate with the Worklight Server. * Don't forget to specify and implement onSuccess and onFailure callback functions for WL.Client.connect(), e.g: * * WL.Client.connect({ * onSuccess: onConnectSuccess, * onFailure: onConnectFailure * }); * */ // Common initialization code goes here } WL.App.overrideBackButton(backFunc); function backFunc(){ alert('You will back to previous page'); } but after build and deploy the app and I running it, nothing happened after I pressed back button, anybody have the solution ?
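
    One thing worth trying, offered as a guess rather than a confirmed fix: register the override from inside wlCommonInit(), so it runs after the Worklight client framework has finished initializing instead of at script load time:

        function wlCommonInit() {
            // common initialization code goes here
            WL.App.overrideBackButton(backFunc);   // register once WL is ready
        }

        function backFunc() {
            alert('You will go back to the previous page');
        }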

  • Flex - Increase timeout on a PHP service function call

    - by Travesty3
    I'm using Flash Builder 4 Beta 2. I have it connecting to a PHP service. The way I set this up was using the wizard, so I didn't actually write the code to connect to it. The service looks like this: package services.flash { import mx.rpc.AsyncToken; import com.adobe.fiber.core.model_internal; import mx.rpc.AbstractOperation; import valueObjects.CustomDatatype8; import valueObjects.NewUsageData; import mx.collections.ItemResponder; import mx.rpc.remoting.RemoteObject; import mx.rpc.remoting.Operation; import com.adobe.fiber.services.wrapper.RemoteObjectServiceWrapper; import com.adobe.fiber.valueobjects.AvailablePropertyIterator; import com.adobe.serializers.utility.TypeUtility; [ExcludeClass] internal class _Super_FLASH extends RemoteObjectServiceWrapper { // Constructor public function _Super_FLASH() { // initialize service control _serviceControl = new RemoteObject(); var operations:Object = new Object(); var operation:Operation; operation = new Operation(null, "sendCommand"); operation.resultType = Object; operations["sendCommand"] = operation; ... } } One of the functions that I'm calling fetches users from a MySQL database. There are about 30,000 users right now. The service seems to timeout when fetching more than around 22,000 rows, I get the "Channel Disconnected before an acknowledgement was received" error. If I call the PHP script from a browser, it fetches them all with no problems at all, however. I have tried increasing the timeout in the PHP script (which didn't work), but obviously this isn't the problem since the browser is able to pull them up with no problems. Is there a way to increase the timeout of the PHP service in Flash Builder? I'm a bit of a noob when it comes to Flash, so please be descriptive. Thanks in advance!

  • What Happens to Commit Logs on a Branch After Merging?

    - by Levi Hackwith
    Scenario: Programmer creates a branch for project 'foo' called 'my_foo' at revision 5. Programmer makes multiple changes to multiple files as he works on the 'my_foo' feature. At the end of each major step, say adding several new functions to a class, the programmer does an svn commit on the appropriate files, therefore committing them to the branch. After several weeks and many commits later (each commit having a commit log describing what he did), the programmer merges the branch back into the trunk:

        # Assume the following is being done from inside a working copy of the trunk:
        svn merge -r 5:15 file:///path/to/repo/branches/my_foo

    Huzzah! He's merged all his changes back into trunk! There's much rejoicing and drinking of Mountain Dew. Now let's say another programmer comes along a week later and updates their working copy from revision 5 to revision 15. "Wow", they say. "I wonder what's changed since revision 5". The programmer then runs svn log on their working copy and they get something like this:

        ------------------------------------------------------------------------
        r15 | programmer1 | 2010-03-20 21:27:04 -0400 (Sat, 20 Mar 2010) | 1 line
        Merging Version 2.0 Changes into trunk
        ------------------------------------------------------------------------
        r5 | programmer2 | 2010-02-15 10:59:55 -0500 (Mon, 15 Feb 2010) | 1 line
        Added assets/images/tumblr_icon.png to trunk

    What the heck happened to all the notes that the other programmer put in with all of his commits in his branch? Do those not get pulled over during a merge? Am I crazy or just forgetting something?
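
    The individual messages still exist, but they live on the branch; a plain svn log of trunk shows only the single merge commit (r15). On Subversion 1.5 or newer, where svn merge records mergeinfo, the merged revisions can be expanded back into the trunk history, for example:

        # show the branch revisions folded beneath the merge commit
        svn log -g -r 15 .

        # or read the branch's own history directly
        svn log file:///path/to/repo/branches/my_foo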

  • Is it possible to run my Windows Forms application on the Windows CE platform?

    - by Fakhrul
    I am new to Windows CE development and have never done it before. I need some advice from the experts here. In our current project, we are developing a client-server application. The client side is a Windows Forms application based on Windows XP, while the server is a web-based application. This question relates to the client application (Windows Forms). This application uses SQL Server Express Edition for data storage. The data is stored in XML object format. It can also transfer data from client to server via a web service. It also interacts with hardware such as a magnetic stripe reader, a contactless smart card reader, and a thermal printer. Most of the communication between the hardware devices and the system is based on serial ports. It uses the standard app.config for configuration and is a multi-threaded application. There is a new requirement to use a handheld device which runs the Windows CE platform. This handheld includes the required equipment such as a contactless smart card reader, printer and magnetic stripe reader. Instead of developing a new client application, is it possible for me to convert my current application that is based on Windows XP to Windows CE? If yes, how can I do that? If no, is there any other brilliant suggestion to do this? Thanks in advance. Software Engineer

  • What is the best way to attach extension methods to static classes rather than to instances of a class?

    - by John Gietzen
    If I have a method for calculating the greatest common divisor of two integers as: public static int GCD(int a, int b) { return b == 0 ? a : GCD(b, a % b); } What would be the best way to attach that to the System.Math class? Here are the three ways I have come up with: public static int GCD(this int a, int b) { return b == 0 ? a : b.GCD(a % b); } // Lame... var gcd = a.GCD(b); and: public static class RationalMath { public static int GCD(int a, int b) { return b == 0 ? a : GCD(b, a % b); } } // Lame... var gcd = RationalMath.GCD(a, b); and: public static int GCD(this Type math, int a, int b) { return b == 0 ? a : typeof(Math).GCD(b, a % b); } // Neat? var gcd = typeof(Math).GCD(a, b); The desired syntax is Math.GCD since that is the standard for all mathematical functions. Any suggestions? What should I do to get the desired syntax?

  • Passing a TADODataSet as a parameter using TADOStoredProc

    - by Salvador
    i have an oracle stored procedure with 2 parameters declarated as input cursors. how i can assign this parameters the TADOStoredProc component? ORACLE PROCEDURE MYSTOREDPROCEDURE(P_HEADER IN TCursor, P_DETAL IN TCursor, P_RESULT OUT VARCHAR2) BEGIN //My code goes here END; Delphi function TMyClass.Add(Header, Detail: TADODataSet;var _Result: string): boolean; Var StoredProc : TADOStoredProc; begin Result:=False; StoredProc:=TADOStoredProc.Create(nil); try StoredProc.Connection :=ADOConnection1; StoredProc.ProcedureName:='MYSTOREDPROCEDURE'; StoredProc.Parameters.Refresh; StoredProc.Parameters.ParamByName('P_HEADER').Value :=Header;//How can assign this parameter? try StoredProc.Open; _Result:=VarToStrNull(StoredProc.Parameters.ParamByName('P_RESULT').Value); StoredProc.Close; Result:=True; except on E : Exception do begin _Result:=E.Message; //exit; end; end; finally StoredProc.Free; end; end;

  • Syntax Error in MySQL Stored Procedure

    - by karthik
    I am using the below stored proc in mysql to generate the insert statements. I am getting the following error : Script line: 4 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '\’”\’,',’ifnull(‘,column_name,’,””)’,',\’”\’)')) INTO @S' at line 12 What would be the syntax problem in that ? DELIMITER $$ DROP PROCEDURE IF EXISTS `InsGen` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`(in_db varchar(20),in_table varchar(20),in_file varchar(100)) BEGIN declare Whrs varchar(500); declare Sels varchar(500); declare Inserts varchar(2000); declare tablename varchar(20); set tablename=in_table; select tablename; # Comma separated column names – used for Select select group_concat(concat(‘concat(\’”\’,',’ifnull(‘,column_name,’,””)’,',\’”\’)')) INTO @Sels from information_schema.columns where table_schema=’test’ and table_name=tablename; # Comma separated column names – used for Group By select group_concat(‘`’,column_name,’`') INTO @Whrs from information_schema.columns where table_schema=’test’ and table_name=tablename; #Main Select Statement for fetching comma separated table values set @Inserts=concat(“select concat(‘insert into “, in_db,”.”,tablename,” values(‘,concat_ws(‘,’,”,@Sels,”),’);’) from “, in_db,”.”,tablename,” group by “,@Whrs, ” INTO OUTFILE ‘”, in_file ,”‘”); PREPARE Inserts FROM @Inserts; EXECUTE Inserts; END $$ DELIMITER ;
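
    A guess at the immediate problem, based on where the error points: the pasted procedure uses typographic quotes (‘ ’ “ ”) where MySQL only accepts straight ASCII quotes. For instance, the simpler of the two GROUP_CONCAT statements would need to read roughly like this, and every other quoted string in the procedure needs the same treatment:

        select group_concat('`', column_name, '`')
          into @Whrs
          from information_schema.columns
         where table_schema = 'test'
           and table_name = tablename;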

  • Strange execution times in T-SQL

    - by TonyP
    Hi All I have two stored procedures, the first one calls the second .. If I execute the second one alone it takes over 5 minutes to complete.. But when executed within the first one it takes little over 1 minute.. What is the reason ! Here is the first one ALTER procedure [dbo].[schRefreshPriceListItemGroups] as begin tran delete from PriceListItemGroups if @@error !=0 goto rolback Insert PriceListItemGroups(comno,t$cuno,t$cpls,t$cpgs,t$dsca,t$cpru) SELECT distinct c.comno,c.t$cuno, c.t$cpls,I.t$cpgs,g.t$dsca,g.t$cpru FROM TTCCOM010nnn C JOIN TTDSLS032nnn PL ON PL.comno = c.Comno and PL.t$cpls = c.t$cpls JOIN TTIITM001nnn I ON I.t$item = pl.t$item AND I.comno = pl.comNo JOIN TTCMCS024nnn G ON g.T$cprg = I.t$cpgs AND g.comno = I.Comno WHERE c.t$cpls !='' order by comno desc, t$cuno, t$cpgs if @@error !=0 goto rolback ----------------------------------------------------- Exec scrRefreshCustomersCatalogs ----------------------------------------------------- commit tran return rolback: Rollback tran And the second one Alter proc scrRefreshCustomersCatalogs as declare @baanIds table(id int identity(1,1),baanId varchar(12)) declare @baanId varchar(12),@i int, @n int Insert @baanIds(BaanId) select baanId from ftElBaanIds() SELECT @I=1,@n=max(id) from @baanIds select @i,@n Begin tran if @@error !=0 goto xRollBack WHILE @I <=@n Begin select @baanId=baanId from @baanIds where id=@i if @@error !=0 goto xRollBack Delete from customersCatalogs where comno+'-'+t$cuno=@baanId print Convert(varchar,@i)+' baanId='+@baanId Insert customersCatalogs exec customersCatalog @baanId if @@error !=0 goto xRollBack set @i=@i+1; end Commit Tran Update statistics customersCatalogs with fullscan Return xRollBack: Print '*****Rolling back*************' Rollback tran

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation reprocessing we would like to do would transform the table content based on a granularity of minutes, rather than the current production event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

        +---------+---------+------------+---------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute   | Output |
        +---------+---------+------------+---------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33    | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34    | 133    |
        +---------+---------+------------+---------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and modify the granularity to minutes, eliminating the redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
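
    It can be done in a single INSERT ... SELECT, but MySQL has no built-in row generator, so it needs an integer sequence to join against. A sketch under assumed names (events as the source table with event_start/event_end/output columns, production_minutes as the target, and a helper table seq holding integers 0..N, where N is at least the longest event length in minutes):

        INSERT INTO production_minutes
               (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id,
               e.work_id,
               e.machine_id,
               DATE_ADD(DATE_FORMAT(e.event_start, '%Y-%m-%d %H:%i:00'),
                        INTERVAL s.n MINUTE) AS production_minute,
               ROUND(e.output / (TIMESTAMPDIFF(MINUTE, e.event_start, e.event_end) + 1)) AS output
        FROM   events e
        JOIN   seq s
          ON   s.n <= TIMESTAMPDIFF(MINUTE, e.event_start, e.event_end);

    The join fans each event row out into one row per elapsed minute (difference in minutes plus one rows), and because every source row expands independently it also covers the hundreds-of-machines case.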

  • Sorting Arrays by More Than One Value, and Prioritizing the Sort based on Column data.

    - by Mark Tomlin
    I'm looking for a way to sort an array (we call this a row), with an array of values (that I'll call columns). Each row has columns that must be sorted based on the priority of: laptime, lapcount & timestamp. Each column contains this information: split1, split2, split3, laptime, lapcount, timestamp. laptime is in hundredths of a second. (1:23.45 or 1 Minute, 23 Seconds & 45 Hundredths is 8345.) Lapcount is a simple unsigned tiny int, or unsigned char. timestamp is unix epoch. The lowest laptime should get a better standing in this sort. Should two people's laptimes be equal, then the timestamp will be used to give the better standing in this sort. Should two people's timestamps be equal, then the person with the lower lapcount gets the better standing in this sort. By better standing, I mean closer to the top of the array, closer to index zero where it is a numerical array. I think the array sorting functions built into PHP can do this with a callback; I was wondering what the best approach for a weighted sort like this would be.
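
    PHP's usort() with a comparator covers this; a sketch assuming PHP 5.3+ (for the anonymous function) and rows stored as associative arrays keyed by the column names above:

        <?php
        // rank by laptime, then timestamp, then lapcount - all ascending,
        // so "better" rows end up nearer index 0
        usort($rows, function ($a, $b) {
            if ($a['laptime'] != $b['laptime']) {
                return ($a['laptime'] < $b['laptime']) ? -1 : 1;
            }
            if ($a['timestamp'] != $b['timestamp']) {
                return ($a['timestamp'] < $b['timestamp']) ? -1 : 1;
            }
            if ($a['lapcount'] != $b['lapcount']) {
                return ($a['lapcount'] < $b['lapcount']) ? -1 : 1;
            }
            return 0;
        });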

  • Sorting Arrays by More Than One Value, and Prioritizing the Sort based on Column data.

    - by Mark Tomlin
    I'm looking for a way to sort an array based on the information in certain cells of each row, which I'll call columns. Each row has columns that must be sorted based on the priority of: laptime, lapcount & timestamp. Each column contains this information: split1, split2, split3, laptime, lapcount, timestamp. laptime is in hundredths of a second. (1:23.45 or 1 Minute, 23 Seconds & 45 Hundredths is 8345.) Lapcount is a simple unsigned tiny int, or unsigned char. timestamp is unix epoch. The lowest laptime should get a better standing in this sort. Should two people's laptimes be equal, then the timestamp will be used to give the better standing in this sort. Should two people's timestamps be equal, then the person with the lower lapcount gets the better standing in this sort. By better standing, I mean closer to the top of the array, closer to index zero where it is a numerical array. I think the array sorting functions built into PHP can do this with a callback; I was wondering what the best approach for a weighted sort like this would be.

  • Working around "one executable per project" in Visual C# for many small test programs

    - by Kevin Ivarsen
    When working with Visual Studio in general (or Visual C# Express in my particular case), it looks like each project can be configured to produce only one output - e.g. a single executable or a library. I'm working on a project that consists of a shared library and a few application, and I already have one project in my solution for each of those. However, during development I find it useful to write small example programs that can run one small subsystem in isolation (at a level that doesn't belong in the unit tests). Is there a good way to handle this in Visual Studio? I'd like to avoid adding several dozen separate projects to my solution for each small test program I write, especially when these programs will typically be less than 100 lines of code. I'm hoping to find something that lets me continue to work in Visual Studio and use its build system (rather than moving to something like NAnt). I could foresee the answer being something like: A way of setting this up in Visual Studio that I haven't found yet A GUI like NUnit's graphical runner that searches an assembly for classes with defined Main() functions that you can select and run A command line tool that lets you specify an assembly and a class with a Main function to run

  • Dataflow Pipeline holding on to memory

    - by Jesse Carter
    I've created a Dataflow pipeline consisting of 4 blocks (which includes one optional block) which is responsible for receiving a query object from my application across HTTP and retrieving information from a database, doing an optional transform on that data, and then writing the information back in the HTTP response. In some testing I've done I've been pulling down a significant amount of data from the database (570 thousand rows) which are stored in a List object and passed between the different blocks and it seems like even after the final block has been completed the memory isn't being released. Ram usage in Task Manager will spike up to over 2 GB and I can observe several large spikes as the List hits each block. The signatures for my blocks look like this: private TransformBlock<HttpListenerContext, Tuple<HttpListenerContext, QueryObject>> m_ParseHttpRequest; private TransformBlock<Tuple<HttpListenerContext, QueryObject>, Tuple<HttpListenerContext, QueryObject, List<string>>> m_RetrieveDatabaseResults; private TransformBlock<Tuple<HttpListenerContext, QueryObject, List<string>>, Tuple<HttpListenerContext, QueryObject, List<string>>> m_ConvertResults; private ActionBlock<Tuple<HttpListenerContext, QueryObject, List<string>>> m_ReturnHttpResponse; They are linked as follows: m_ParseHttpRequest.LinkTo(m_RetrieveDatabaseResults); m_RetrieveDatabaseResults.LinkTo(m_ConvertResults, tuple => tuple.Item2 is QueryObjectA); m_RetrieveDatabaseResults.LinkTo(m_ReturnHttpResponse, tuple => tuple.Item2 is QueryObjectB); m_ConvertResults.LinkTo(m_ReturnHttpResponse); Is it possible that I can set up the pipeline such that once each block is done with the list they no longer need to hold on to it as well as once the entire pipeline is completed that the memory is released?
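
    A hedged thought rather than a confirmed fix: completed blocks should not pin their inputs once nothing references the lists, so some of the growth may simply be the GC not yet reclaiming the large lists. One knob that at least bounds how much sits queued between stages is BoundedCapacity; it limits peak buffering, it does not force memory back to the OS:

        // ConvertTuple stands in for the existing transform delegate
        var oneInFlight = new ExecutionDataflowBlockOptions { BoundedCapacity = 1 };

        m_ConvertResults = new TransformBlock<Tuple<HttpListenerContext, QueryObject, List<string>>,
                                              Tuple<HttpListenerContext, QueryObject, List<string>>>(
            tuple => ConvertTuple(tuple),
            oneInFlight);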

  • Fastest PNG decoder for .NET

    - by sboisse
    Our web server needs to process many compositions of large images together before sending the results to web clients. This process is performance critical because the server can receive several thousands of requests per hour. Right now our solution loads PNG files (around 1MB each) from the HD and sends them to the video card so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API. We saw the performance was not too good. To understand if the problem was loading from the HD or the decoding of the PNG, we modified that by loading the file in a memory stream, and then sending that memory stream to the .NET PNG decoder. The difference of performance using XNA or using System.Windows.Media.Imaging.PngBitmapDecoder class is not significant. We roughly get the same levels of performance. Our benchmarks show the following performance results: Load images from disk: 37.76ms 1% Decode PNGs: 2816.97ms 77% Load images on Video Hardware: 196.67ms 5% Composition: 87.80ms 2% Get composition result from Video Hardware: 166.21ms 5% Encode to PNG: 318.13ms 9% Store to disk: 3.96ms 0% Clean up: 53.00ms 1% Total: 3680.50ms 100% From these results we see that the slowest parts are when decoding the PNG. So we are wondering if there wouldn't be a PNG decoder we could use that would allow us to reduce the PNG decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10MB in size instead of 1MB and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.

  • Purpose of 3rd-party MVC?

    - by Honey
    I've seen many third-party MVCs or frameworks such as CodeIgniter, CakePHP, and so on. What I want to know is: what are their purposes? I've created my own framework - call it an MVC or a framework (in my opinion they're all the same). In my framework I have all the classes in one folder called classes and all the functions in another. It's all organized, and when a new project comes in I am able to complete it fast. I have looked at the applications that I mentioned and they seem to have huge articles and tutorials to study. What is the purpose? Why not study the main language, such as PHP, JavaScript/Ajax or jQuery, and so on, then build something that you know the ins and outs of, so that whatever project comes your way you know what to do? I've known some people who use CakePHP and for every project they get stuck and need to figure out what to do. Another guy I knew worked with Joomla, and for every basic company website that came his way he would reverse engineer Joomla to make it work with the site. Are people using these applications because they lack knowledge in the languages? Or do they sometimes have no choice but to make a site while lacking the language skills and put something together?

  • WordPress thumbnail creation question

    - by Will Ashworth
    I'm trying to use Wordpress' built-in thumbnailing and image re-sizing in my Wordpress 2.9.2 installation. I'm trying to get various sizes (post listing/results 160x160 & "single.php" 618x150) and for some reason the single.php one works, but only half way. Not sure if I'm doing something wrong here. I have it working…sorta. I’m totally stuck and there seems to be a lack of documentation on the Codex for this feature so here goes. The small 160×160 thumbnail for article listings/search views works fine. It crops it, all’s groovy. The issue comes when I go to format the image for the single.php article details view. It crops, but then scales down even further for some reason. Screenshot: http://c1319072.cdn.cloudfiles.rackspacecloud.com/4-15-2010%204-56-46%20PM.png NOTE: every time I re-test this I’m completely deleting the image from the media section and re-uploading the image entirely. I also have the re-create thumbnails plugin so I know it’s not caching. Here is my code included in "functions.php". This will help in debugging. add_theme_support( ‘post-thumbnails’ ); set_post_thumbnail_size( 160, 160, true ); // Normal post thumbnails add_image_size( ’single-post-thumbnail’, 618, 150, true ); // Permalink thumbnail size
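
    Two things worth double-checking, shown as a hedged sketch rather than a confirmed answer: that functions.php really contains straight quotes (the curly quotes in the pasted snippet would either be a blog-rendering artifact or an actual parse problem), and that single.php requests the named size explicitly, since the_post_thumbnail() with no argument falls back to the default post-thumbnail size:

        <?php
        // functions.php - same registrations as in the question, straight quotes
        add_theme_support( 'post-thumbnails' );
        set_post_thumbnail_size( 160, 160, true );                  // listing / search views
        add_image_size( 'single-post-thumbnail', 618, 150, true );  // single.php detail view

        // single.php - inside the loop, ask for the named size
        the_post_thumbnail( 'single-post-thumbnail' );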
