Search Results

Search found 20208 results on 809 pages for 'compiled query'.

Page 454/809

  • jqGrid : "All in One" approach width jqGridEdit Class > how to set a composite primary key ?

    - by Qualliarys
    Hello. How do I set a composite primary key with the "All in One" approach (grid defined in a JS file, data served by the jqGridEdit class in a PHP file)? By a composite primary key of a table T, I mean a primary key made up of several fields belonging to T. Here is my test, but I get no data and cannot use the CRUD operations.

    In my JS file I have these lines:

        "colModel": [
            {"name":"index", "index":"index", "label":"index"},             // <= that's just the index of my table
            {"name":"user",  "index":"user",  "label":"user",  "key":true}, // <= a part of my composite primary key
            {"name":"pwd",   "index":"pwd",   "label":"pwd",   "key":true}, // <= a part of my composite primary key
            {"name":"state", "index":"state", "label":"state", "key":true}, // <= a part of my composite primary key
            ...                                                             // <= and so on
        ],
        "url": "mygrid_crud.php",
        "datatype": "json",
        "jsonReader": {repeatitems: false},
        "editurl": "mygrid_crud.php",
        "prmNames": {"id": "index"}   // <= what do I need to write here?

    In my PHP file (mygrid_crud.php):

        $grid = new jqGridEdit($conn);
        $query = "SELECT * FROM mytable WHERE user='$user' AND pwd='$pwd' AND state='$state'...";
        // <= is SELECT * OK, or do I need to list every field I need?
        $grid->SelectCommand = $query;
        $grid->dataType = "json";
        $grid->table = 'mytable';
        $grid->setPrimaryKeyId('index');   // <= what do I need to write here?
        ...
        $grid->editGrid();

    Please tell me what is wrong and how to set a composite primary key with this approach. Thank you for your responses.

    Read the article

  • Best way to store and deliver files on a CMS

    - by Tio
    Hi all. First of all, this isn't another question about storing images in the DB vs. the file system; I'm already storing the images on the file system. I'm just struggling to find a better way to show them to all my users.

    I'm currently using this system: I generate a random filename using md5(uniqid()). The problem I'm facing is that the image is then presented with a link like this:

        <img src="/_Media/0027a0af636c57b75472b224d432c09c.jpg" />

    As you can see, this isn't the prettiest way to show an image (or a file), and the name says nothing about the image. I've been working with a CMS system at work that stores all uploaded files in a single table; to access an image it uses something like this:

        <img src="../getattachment/94173104-e03c-499b-b41c-b25ae05a8ee1/Menu-1/Escritorios.aspx?width=175&height=175" />

    As you can see, this path has a meaning, compared to the other one. The problem is that it puts a big strain on the DB. For example, on the last site I made, I have an area with around 60 images; to show those 60 images I would have to run at least 60 individual queries against the database, besides the other queries that retrieve the various content on the page. I think you understand my dilemma. Has anyone gone through this problem who can give me some pointers on how to solve it? Thanks.
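
    For reference, the 60 per-image queries can be collapsed into one batched lookup, with the web server rewriting the friendly URL back to the id; a minimal SQL sketch, where the attachments table and its id, file_name and title columns are illustrative names rather than anything from the question:

        -- Collect the ids referenced by the page, then fetch them in a single round trip.
        SELECT id, file_name, title
        FROM attachments
        WHERE id IN (101, 102, 103, 160);   -- the ids gathered for the page

    The human-readable part of the URL (a slugified title, say) can then be purely cosmetic, with the id alone used to locate the file.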

    Read the article

  • ASP.net SessionState Error in Design Mode

    - by stringo0
    I'm getting a weird error in the design view of a user-creation page, for two controls:

        Error Creating Control - wCreateUser
        Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive.

    (There's some more.) I've done both of these, but I'm still getting the error in design mode. The controls work fine when compiled and on the live site; this only happens in the Visual Web Developer 2010 design view for the page. Any ideas as to how I can resolve this? Thanks!

    Read the article

  • ASP.NET plug-in architecture, settings problem

    - by Xaqron
    I want to divide the business layer (BLL) of an ASP.NET application into multiple components. Each component is a .NET class library compiled as a standalone DLL, and each should have its own configuration file. For example, "MyNameSpace.Users.dll" contains the classes for the website's users, and there is a password policy that checks whether a password is at least x characters long. When the webmaster edits this DLL's config file and changes x to y, the component should use the new value (y) from then on and enforce passwords of at least y characters. I want each component to be its own project, compiled separately (rather than putting all the projects into one Visual Studio solution), with the resulting DLLs dropped into the "Bin" folder of my ASP.NET application. Is this possible? Where should I put these config files?
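
    One possible arrangement (a sketch only, with illustrative names, not something prescribed by ASP.NET) is for each library to read a config file that sits next to its own DLL in Bin, e.g. Bin\MyNameSpace.Users.dll.config, via ConfigurationManager.OpenMappedExeConfiguration:

        using System.Configuration;   // reference System.Configuration.dll
        using System.Reflection;

        public static class UserSettings
        {
            // Hypothetical accessor: reads MinPasswordLength from the config file
            // lying beside the component's own assembly.
            public static int MinPasswordLength
            {
                get
                {
                    string dllPath = Assembly.GetExecutingAssembly().Location;
                    var map = new ExeConfigurationFileMap { ExeConfigFilename = dllPath + ".config" };
                    Configuration config =
                        ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);

                    KeyValueConfigurationElement setting = config.AppSettings.Settings["MinPasswordLength"];
                    return setting != null ? int.Parse(setting.Value) : 8;   // fall back to a default
                }
            }
        }

    The web application's own web.config stays untouched and each DLL's file can be edited independently; caching the value (or watching the file for changes) is left out of the sketch.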

    Read the article

  • MySQL Connector: parameters not being added

    - by LookitsPuck
    Hey all! Looking at my query log for MySQL, I see my parameters aren't being added. Here's my code:

        MySqlConnection conn = new MySqlConnection(ApplicationVariables.ConnectionString());
        MySqlCommand com = new MySqlCommand();
        try
        {
            conn.Open();
            com.Connection = conn;
            com.CommandText = String.Format(@"SELECT COUNT(*) AS totalViews
                                              FROM pr_postreleaseviewslog AS prvl
                                              WHERE prvl.dateCreated BETWEEN (@startDate) AND (@endDate)
                                                AND prvl.postreleaseID IN ({0})", ids);
            com.CommandType = CommandType.Text;
            com.Parameters.Add(new MySqlParameter("@startDate", thisCampaign.Startdate));
            com.Parameters.Add(new MySqlParameter("@endDate", endDate));
            numViews = Convert.ToInt32(com.ExecuteScalar());
        }
        catch (Exception ex) { }
        finally
        {
            conn.Dispose();
            com.Dispose();
        }

    Looking at the query log, I see this:

        SELECT COUNT(*) AS totalViews
        FROM pr_postreleaseviewslog AS prvl
        WHERE prvl.dateCreated BETWEEN (@startDate) AND (@endDate)
          AND prvl.postreleaseID IN (1,2)

    I've used the MySQL .NET connector on countless projects (I actually have a base class that takes care of opening these connections and closing them, with transactions, etc.). However, I took over this application, and here I am now. Thanks for the help!
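
    A hedged observation rather than a definitive fix: if the placeholders reach the server unreplaced, the empty catch block is also hiding whatever error the binding raises, and some older Connector/NET versions expect '?' rather than '@' as the parameter marker. A sketch of the same call that lets errors surface and lets the connector infer the parameter types:

        // Sketch only; ids, thisCampaign, endDate and ApplicationVariables come from the original code.
        using (MySqlConnection conn = new MySqlConnection(ApplicationVariables.ConnectionString()))
        using (MySqlCommand com = conn.CreateCommand())
        {
            conn.Open();
            com.CommandText = String.Format(
                @"SELECT COUNT(*) AS totalViews
                  FROM pr_postreleaseviewslog AS prvl
                  WHERE prvl.dateCreated BETWEEN @startDate AND @endDate
                    AND prvl.postreleaseID IN ({0})", ids);
            com.Parameters.AddWithValue("@startDate", thisCampaign.Startdate);
            com.Parameters.AddWithValue("@endDate", endDate);
            // Any binding or connection problem now throws instead of vanishing in an empty catch.
            int numViews = Convert.ToInt32(com.ExecuteScalar());
        }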

    Read the article

  • From interpreted to native code: "dynamic" languages compiler support

    - by Daniel
    First, I am aware that "dynamic languages" is a term used mainly by one vendor; I am using it just as a container word for languages like Perl (a favorite of mine), Python, Tcl, Ruby, PHP and so on. They are interpreted, but what interests me here are languages that strongly support programmer efficiency and the typical constructs of modern interpreted languages. My question is: are there dynamic languages that can be compiled efficiently into native executable code, typically for Windows platforms? Which ones? Perhaps using some third-party ad hoc tools? I am not talking about huge executables that carry a full interpreter with them, or similar tricks, nor some smart module able to include its own dependencies or required modules, but honest, straight, standard, solid executable code. If not, is there some technical reason inhibiting the availability of such a best-of-both-worlds feature? Thanks! Daniel

    Read the article

  • Vertically Merge Multiple Tables in MySQL by Joint Primary Key

    - by world
    Hello, I'll attempt to make my question as clear as possible. I'm fairly inexperienced with SQL and only know the really basic queries. To get a better idea I've been reading the MySQL manual for the past few days, but I couldn't really find a concrete solution, and my needs are quite specific. I've got three MySQL MyISAM tables: table1, table2 and table3. Each table has an ID column (ID, ID2, ID3 respectively) and different data columns. For example, table1 has [ID, Name, Birthday, Status, ...], table2 has [ID2, Country, Zip, ...], table3 has [ID3, Source, Phone, ...]; you get the idea. The ID, ID2, ID3 columns are common to all three tables: if an ID value is in table1, it will also appear in table2 and table3. The number of rows in these tables is identical, about 10m rows in each. What I'd like to do is create a new table that contains (most of) the columns of all three tables and merge them into it. The dates, for instance, must be converted, because right now they're in VARCHAR YYYYMMDD format. Reading the MySQL manual I figured STR_TO_DATE() would do the job, but I don't know how to write the query itself in the first place, so I have no idea how to integrate the date conversion. So basically, after I create the new table (which I do know how to do), how can I merge the three tables into it, integrating the date conversion into the query?
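
    A minimal sketch of the kind of statement involved, assuming the new table is called merged and picking a few illustrative columns; the join condition and column list would need to match the real schema:

        -- Populate the new table in one pass, joining the three source tables on
        -- their shared id and converting the VARCHAR 'YYYYMMDD' dates as we go.
        INSERT INTO merged (id, name, birthday, status, country, zip, source, phone)
        SELECT t1.ID,
               t1.Name,
               STR_TO_DATE(t1.Birthday, '%Y%m%d'),   -- 'YYYYMMDD' text -> DATE
               t1.Status,
               t2.Country,
               t2.Zip,
               t3.Source,
               t3.Phone
        FROM table1 t1
        JOIN table2 t2 ON t2.ID2 = t1.ID
        JOIN table3 t3 ON t3.ID3 = t1.ID;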

    Read the article

  • Regular expression for finding non-breaking string names in code and then breaking them up for SQL queries

    - by Rob Segal
    I am trying to develop a regex for finding camel-case strings in several code files I am working with, so I can break them up into separate words for use in a SQL query. I have strings of the form:

        EmailAddress
        FirstName
        MyNameIs

    and I want them like this:

        Email Address
        First Name
        My Name Is

    An example SQL query which I currently have is:

        select FirstName, MyNameIs from MyTables

    I need the queries in the form:

        select FirstName as 'First Name', MyNameIs as 'My Name Is' from MyTables

    Any time a new capital letter appears, that should start a new grouping which I can pick out of the matched string. I currently have the following regex:

        ([A-Z][a-z]+)+

    which does match the cases shown above, but when I want to perform a replace I need to define groups. Currently I have tried:

        (([A-Z])([a-z]+))+

    which sort of works, except that it picks out "Address" as the first grouping from "EmailAddress", as opposed to "Email", which is what I was expecting. No doubt there is something I'm misunderstanding here, so any help is greatly appreciated.
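
    The repeated group only keeps its last repetition, which is why "Address" shows up instead of "Email". If the end goal is just to insert spaces before interior capitals, one sketch in C# (the console program and sample strings are illustrative) sidesteps group counting with a zero-width match:

        using System;
        using System.Text.RegularExpressions;

        class CamelCaseSplitter
        {
            static void Main()
            {
                string[] names = { "EmailAddress", "FirstName", "MyNameIs" };
                foreach (string name in names)
                {
                    // Insert a space at every position that is not the start of the
                    // string and is immediately followed by an uppercase letter.
                    string spaced = Regex.Replace(name, "(?<!^)(?=[A-Z])", " ");
                    Console.WriteLine("{0} as '{1}'", name, spaced);   // EmailAddress as 'Email Address'
                }
            }
        }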

    Read the article

  • PDO Database Connections Problem

    - by Metropolis
    Hey everyone. Over a year ago I created my own database classes which use PDO and handle all preparing, executing, and closing of connections. These classes have been working great up until now. There are two different database servers I am pulling from: MySQL and MS SQL Express. I retrieve an employee id from the MySQL server and use it to get that employee's information from the MS SQL server. There are about 11k records coming from the MySQL server, and my program only makes it through 1200 before crashing with an error like the following:

        Connection failed (odbc:Driver=FreeTDS;Servername=MSSQLExpress;Database=SMDINC)
        Class (PDOException)
        SQLSTATE[08001] SQLDriverConnect: 0 [unixODBC][FreeTDS][SQL Server]Unable to connect to data source

    It seems like the program is not able to connect to the data source, yet it runs the exact same query about 30 times before this with no problem. Also, I have thoroughly checked all of the data going into the query and it looks fine. I believe the issue may be that too many connections are being created; I have tried to close all connections in many different places, but nothing seems to fix the problem. Any debugging help or suggestions would be appreciated! Craig Metrolis

    Read the article

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table is ([id] is an identity):

        [id], [parent_id], [path]
          1,   NULL,        1
          2,   1,           1-2
          3,   1,           1-3
          4,   3,           1-3-4

    My goal is to query quickly for multiple rows of this table and view the full path of each node from its root, through its superiors, down to itself. The ultimate question is: should I generate this path on insert and maintain it in its own column, or generate it at query time to save disk space? I guess it depends on whether this table is write-heavy or read-heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. The "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path":

        1. AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity
        2. INSTEAD OF INSERT TRIGGER - does not require the insert to pass a NULL path, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY()
        3. STORED PROCEDURE - requires all inserts into this table to go through the stored procedure implementing the trigger logic
        4. VIEW - requires building the path in the view

    1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2, and 3 require maintaining a path column on the table. 4 removes all the limitations of the above but requires the view to perform the path logic and requires use of the view whenever a path is to be displayed. I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.
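
    For the compute-at-read-time option (4), SQL Server 2005's recursive CTEs can build the dash-separated path on the fly; a sketch, assuming the table is simply named tree:

        -- Walk from the roots down, appending each id to its parent's path.
        WITH tree_path (id, parent_id, path) AS
        (
            SELECT id, parent_id, CAST(id AS VARCHAR(900))
            FROM   tree
            WHERE  parent_id IS NULL
            UNION ALL
            SELECT t.id, t.parent_id,
                   CAST(p.path + '-' + CAST(t.id AS VARCHAR(20)) AS VARCHAR(900))
            FROM   tree t
            JOIN   tree_path p ON t.parent_id = p.id
        )
        SELECT id, parent_id, path
        FROM   tree_path;

    Wrapping this in a view gives option 4 without storing the column; whether that is cheap enough depends on how read-heavy the table really is.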

    Read the article

  • Stack trace for C++ using gcc

    - by dimba
    We use stack traces in a proprietary assert-like macro to catch developer mistakes; when an error is caught, the stack trace is printed. I find gcc's backtrace()/backtrace_symbols() pair insufficient: the names are mangled, and there is no line information. The first problem can be resolved with abi::__cxa_demangle. The second is tougher, however. I found a replacement for backtrace_symbols(). It is better than gcc's backtrace_symbols(), since it can retrieve line numbers (if compiled with -g) and you don't need to compile with -rdynamic. However, the code is GNU-licensed, so IMHO I can't use it in commercial code. Any proposals?
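
    For the demangling half, a self-contained sketch of the usual approach: take the strings backtrace_symbols() returns, pull the mangled name out from between '(' and '+', and feed it to abi::__cxa_demangle (line numbers still need a separate mechanism, e.g. addr2line or the debug information):

        #include <execinfo.h>
        #include <cxxabi.h>
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        // Print the current call stack with demangled names where possible.
        // Link with -rdynamic so the symbol names are exported.
        void print_stack_trace()
        {
            void *frames[64];
            int count = backtrace(frames, 64);
            char **symbols = backtrace_symbols(frames, count);
            if (symbols == NULL)
                return;

            for (int i = 0; i < count; ++i) {
                // Entries look like "binary(_ZN3Foo3barEv+0x1a) [0x4006f0]".
                char *begin = std::strchr(symbols[i], '(');
                char *plus  = begin ? std::strchr(begin, '+') : NULL;
                if (begin != NULL && plus != NULL && plus > begin + 1) {
                    *plus = '\0';   // temporarily terminate the mangled name
                    int status = 0;
                    char *demangled = abi::__cxa_demangle(begin + 1, NULL, NULL, &status);
                    std::printf("#%d %s\n", i, status == 0 ? demangled : begin + 1);
                    std::free(demangled);
                    *plus = '+';    // restore the original string
                } else {
                    std::printf("#%d %s\n", i, symbols[i]);
                }
            }
            std::free(symbols);
        }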

    Read the article

  • What's the best way to select max over multiple fields in SQL?

    - by allyourcode
    The kind of thing I want to do is select max(f1, f2, f3). I know this doesn't work, but I think what I want should be pretty clear (see update 1). I was thinking of doing select max(concat(f1, '--', f2 ...)), but this has various disadvantages; in particular, the concat will probably slow things down. What's the best way to get what I want?

    Update 1: The answers I've gotten so far aren't what I'm after. max works over a set of records, but it compares them using only one value; I want max to consider several values, just like the way order by can consider several values.

    Update 2: Suppose I have the following table:

        id  class_name  order_by1  order_by2
        1   a           0          0
        2   a           0          1
        3   b           1          0
        4   b           0          9

    I want a query that will group the records by class_name and then, within each "class", select the record that would come first if you ordered by order_by1 ascending then order_by2 ascending. The result set would consist of records 2 and 3. In my magical query language, it would look something like this:

        select max(* order by order_by1 ASC, order_by2 ASC)
        from table
        group by class_name
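
    A note on interpretation: the expected rows (2 and 3) are the ones that sort first when order_by1 and order_by2 are taken in descending order, i.e. the per-group lexicographic maximum, which also matches the max() in the magical query. Under that reading, a portable sketch (the table name is a placeholder) keeps the row of each class for which no "greater" row exists:

        SELECT t.*
        FROM   my_table t
        WHERE  NOT EXISTS (
                   SELECT 1
                   FROM   my_table x
                   WHERE  x.class_name = t.class_name
                     AND (x.order_by1 > t.order_by1
                          OR (x.order_by1 = t.order_by1 AND x.order_by2 > t.order_by2))
               );   -- returns rows 2 and 3 for the sample data; ties would return more than one row per class

    Flipping the > comparisons to < gives the ascending-first row per class instead.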

    Read the article

  • .NET 2.0 vs .NET 4.0 loading error

    - by David Rutten
    My class library is compiled against .NET 2.0 and works just fine whenever I load it as a plug-in under the 2.0 runtime. If, however, the master application is running the .NET 4.0 runtime, I get an exception as soon as the resources need to be accessed:

        Exception occurred during processing of command: Grasshopper Plug-in = Grasshopper
        Could not find file 'Grasshopper.resources'.

        Stack trace:
           at UnhandledExceptionLogger.UnhandledThreadException(Object sender, ThreadExceptionEventArgs args)
           at System.Windows.Forms.Application.ThreadContext.OnThreadException(Exception t)
           at System.Windows.Forms.Control.WndProcException(Exception e)
           at System.Windows.Forms.ControlNativeWindow.OnThreadException(Exception e)
           at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
           at System.Windows.Forms.SafeNativeMethods.ShowWindow(HandleRef hWnd, Int32 nCmdShow)
           at System.Windows.Forms.Control.SetVisibleCore(Boolean value)
           at System.Windows.Forms.Form.SetVisibleCore(Boolean value)
           at System.Windows.Forms.Form.Show(IWin32Window owner)
           ....

    What's going on, and how do I make my project load on all .NET runtimes?

    Read the article

  • FluentNHibernate, getting 1 column from another table

    - by puffpio
    We're using FluentNHibernate and we have run into a problem where our object model requires data from two tables, like so:

        public class MyModel
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual int FooId { get; set; }
            public virtual string FooName { get; set; }
        }

    There is a MyModel table that has Id, Name, and FooId as a foreign key into the Foo table; the Foo table contains Id and FooName. This problem is very similar to another post here: http://stackoverflow.com/questions/1896645/nhibernate-join-tables-and-get-single-column-from-other-table but I am trying to figure out how to do it with FluentNHibernate. I can map Id, Name, and FooId very easily, but I am having trouble mapping FooName. This is my class map:

        public class MyModelClassMap : ClassMap<MyModel>
        {
            public MyModelClassMap()
            {
                this.Id(a => a.Id).Column("AccountId").GeneratedBy.Identity();
                this.Map(a => a.Name);
                this.Map(a => a.FooId);

                // my attempt to map FooName, but it doesn't work
                this.Join("Foo", join => join.KeyColumn("FooId").Map(a => a.FooName));
            }
        }

    With that mapping I get this error:

        The element 'class' in namespace 'urn:nhibernate-mapping-2.2' has invalid child element 'join' in
        namespace 'urn:nhibernate-mapping-2.2'. List of possible elements expected: 'joined-subclass, loader,
        sql-insert, sql-update, sql-delete, filter, resultset, query, sql-query' in namespace
        'urn:nhibernate-mapping-2.2'.

    Any ideas?

    Read the article

  • Java RMI doesn't create the skeleton class

    - by pavithra
    I wrote a remote service, MyRemoteImpl.java, and ran the following command after compiling it:

        rmic MyRemoteImpl

    I learned that this is supposed to create a stub class and a skeleton class, but I can only see the stub class. Why is that? The other problem I faced: after running rmiregistry I started the service, but it gives the following error. I suspect I'm getting this error because the skeleton class is missing?

        java.net.MalformedURLException: invalid URL String: Remote Hello
            at java.rmi.Naming.parseURL(Unknown Source)
            at java.rmi.Naming.rebind(Unknown Source)
            at RMIservice.MyRemoteImpl.main(MyRemoteImpl.java:22)
        Caused by: java.net.URISyntaxException: Illegal character in path at index 6: Remote Hello
            at java.net.URI$Parser.fail(Unknown Source)
            at java.net.URI$Parser.checkChars(Unknown Source)
            at java.net.URI$Parser.parseHierarchical(Unknown Source)
            at java.net.URI$Parser.parse(Unknown Source)
            at java.net.URI.<init>(Unknown Source)
            at java.rmi.Naming.intParseURL(Unknown Source)
            ... 3 more

    Please help me to solve this. Thanks in advance!
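
    Two background notes, offered as a sketch rather than a diagnosis of this exact code: rmic has not generated skeleton classes by default since Java 1.2 (rmic -v1.1 forces them, and the 1.2 stub protocol does not need them), and java.rmi.Naming treats its string argument as a URL, so a binding name containing a space ("Remote Hello") is rejected before the registry is ever reached. A minimal bind with a space-free name, using illustrative class names:

        import java.rmi.Naming;
        import java.rmi.Remote;
        import java.rmi.RemoteException;
        import java.rmi.server.UnicastRemoteObject;

        interface MyRemote extends Remote {
            String sayHello() throws RemoteException;
        }

        public class MyRemoteImpl extends UnicastRemoteObject implements MyRemote {
            public MyRemoteImpl() throws RemoteException { }

            public String sayHello() { return "Remote Hello"; }

            public static void main(String[] args) throws Exception {
                // Bind under a plain name (or a full rmi://host/name URL) with no spaces.
                Naming.rebind("//localhost/RemoteHello", new MyRemoteImpl());
                System.out.println("Service bound");
            }
        }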

    Read the article

  • Is javac enough to build an OSGi bundle?

    - by Chatanga
    To produce a bundle from source using a tool such as javac, you need to provide it with a linear classpath. Unfortunately, that won't work in some situations that are still perfectly legal from an OSGi point of view: dependencies with embedded JARs inside them, and the same packages contained in different dependencies. Since javac doesn't understand OSGi metadata, I can't simply put the dependencies on a classpath; a finer, package-grained approach seems necessary. How is this problem addressed by people using OSGi in an automated process (continuous integration)? Strangely, there are a lot of resources on the web about how to create the bundle JAR (creating the metadata, creating the JAR) provided you already have the classes/inner JARs to put inside, but very little on how to actually get these classes compiled.

    Read the article

  • iPhone: Cannot get simulator to generate .gcda profiling data files.

    - by Derek Clarkson
    I'm attempting to profile my code using the iPhone simulator. I've enabled "Generate Test Coverage File" and "Instrument Program Flow" and added -lgcov to the linker flags. According to everything I've read, that should be all I need to do in terms of setup. When building, I can see the .gcno files appearing alongside the compiled .o files in the build/.build/Debug-iphonesimulator/.build/Objects-normal/i386 directory. But when I run the app in the simulator I do not get any *.gcda files. My understanding is that these files contain the data from the instrumentation, but I cannot find them anywhere on the computer. I know they can be produced and appear alongside the *.gcno files, because I have an old trashed build directory which does have them. I just cannot figure out what I have to do to get them to appear and record the run. Any help appreciated.

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not have to concern myself with tables not directly related to my module, and should instead use data-access functions (written by those responsible for the other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate the details (for example i18n, activation, product availability, etc.). Now I need to show the product title of some product related to a conversation between the vendor and a customer. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about the Product tables), OR I may pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many things I normally consider faults in programming. The problem with the second approach seems to be the countless mini-queries these access functions cause; performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each run a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things?

    UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decided to change the way it does things, or for some reason modified its schema? Some other modules would break or malfunction until the change was integrated into them. The usual ripple-effect problem.

    Read the article

  • Can I mix static and shared-object libraries when linking?

    - by SiegeX
    I have a C project that produces ten executables, all of which I would like to be statically linked. The problem I am facing is that one of these executables uses a third-party library of which only the shared-object version is available. If I pass the -static flag to gcc, ld errors out saying it can't find the library in question (I presume it's looking for the .a version) and the executable is not built. Ideally, I would like to tell ld to link statically as much as it can and fall back to the shared-object library when a static library cannot be found. In the interim I tried something like:

        gcc -static -lib1 -lib2 -shared -lib3rdparty foo.c -o foo.exe

    in the hope that ld would statically link lib1 and lib2 but only have a run-time dependency on lib3rdparty. Unfortunately, this did not work as I intended; instead the -shared flag overrode the -static flag and everything was compiled as shared objects. Is static linking an all-or-nothing deal, or is there some way I can mix and match?
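
    With GNU ld there is a per-library toggle that may be closer to what's wanted than the global -static/-shared switches: -Wl,-Bstatic and -Wl,-Bdynamic apply to the libraries that follow them on the command line. A sketch, reusing the placeholder library names from above:

        # Statically link lib1 and lib2, but take the third-party library (and libc)
        # from their shared objects.
        gcc foo.c -o foo \
            -Wl,-Bstatic  -lib1 -lib2 \
            -Wl,-Bdynamic -lib3rdparty

    The trailing -Wl,-Bdynamic matters: without it the system libraries resolved after yours would also be searched for static archives.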

    Read the article

  • When building a web application project, TFS 2008 builds two separate projects in the _PublishedWebSites folder

    - by Steve Johnson
    I am trying to perform build automation on one of my web application projects built using VS 2008. The _PublishedWebSites folder contains two folders: Web and Deploy. I want TFS 2008 to generate only the Deploy folder and not the Web folder. Here is my TFSBuild.proj file:

        <Project ToolsVersion="3.5" DefaultTargets="Compile" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" />
          <Import Project="$(MSBuildExtensionsPath)\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.targets" />
          <ItemGroup>
            <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Development/Main/MySoftware.sln">
              <Targets></Targets>
              <Properties></Properties>
            </SolutionToBuild>
          </ItemGroup>
          <ItemGroup>
            <ConfigurationToBuild Include="Release|AnyCPU">
              <FlavorToBuild>Release</FlavorToBuild>
              <PlatformToBuild>Any CPU</PlatformToBuild>
            </ConfigurationToBuild>
          </ItemGroup>
          <!--<ItemGroup>
            <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Development/Main/MySoftware.sln">
              <Targets></Targets>
              <Properties></Properties>
            </SolutionToBuild>
          </ItemGroup>
          <ItemGroup>
            <ConfigurationToBuild Include="Release|x64">
              <FlavorToBuild>Release</FlavorToBuild>
              <PlatformToBuild>x64</PlatformToBuild>
            </ConfigurationToBuild>
          </ItemGroup>-->
          <ItemGroup>
            <AdditionalReferencePath Include="C:\3PR" />
          </ItemGroup>
          <Target Name="GetCopyToOutputDirectoryItems"
                  Outputs="@(AllItemsFullPathWithTargetPath)"
                  DependsOnTargets="AssignTargetPaths;_SplitProjectReferencesByFileExistence">
            <!-- Get items from child projects first. -->
            <MSBuild Projects="@(_MSBuildProjectReferenceExistent)"
                     Targets="GetCopyToOutputDirectoryItems"
                     Properties="%(_MSBuildProjectReferenceExistent.SetConfiguration); %(_MSBuildProjectReferenceExistent.SetPlatform)"
                     Condition="'@(_MSBuildProjectReferenceExistent)'!=''">
              <Output TaskParameter="TargetOutputs" ItemName="_AllChildProjectItemsWithTargetPathNotFiltered"/>
            </MSBuild>
            <!-- Remove duplicates. -->
            <RemoveDuplicates Inputs="@(_AllChildProjectItemsWithTargetPathNotFiltered)">
              <Output TaskParameter="Filtered" ItemName="_AllChildProjectItemsWithTargetPath"/>
            </RemoveDuplicates>
            <!-- Target outputs must be full paths because they will be consumed by a different project. -->
            <CreateItem Include="@(_AllChildProjectItemsWithTargetPath->'%(FullPath)')"
                        Exclude="$(BuildProjectFolderPath)/../../Development/Main/Web/Bin*.pdb;
                                 *.refresh;
                                 *.vshost.exe;
                                 *.manifest;
                                 *.compiled;
                                 $(BuildProjectFolderPath)/../../Development/Main/Web/Auth/MySoftware.dll;
                                 $(BuildProjectFolderPath)/../../Development/Main/Web/BinApp_Web_*.dll;"
                        Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='Always' or '%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'">
              <Output TaskParameter="Include" ItemName="AllItemsFullPathWithTargetPath"/>
              <Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectoryAlways"
                      Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='Always'"/>
              <Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectory"
                      Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"/>
            </CreateItem>
          </Target>
          <!-- To modify your build process, add your task inside one of the targets below and uncomment it.
               Other similar extension points exist, see Microsoft.WebDeployment.targets.
          <Target Name="BeforeBuild"> </Target>
          <Target Name="BeforeMerge"> </Target>
          <Target Name="AfterMerge"> </Target>
          <Target Name="AfterBuild"> </Target>
          -->
        </Project>

    I want to build everything that the built-in Deploy project is doing for me, but I don't want the generated Web project, because it contains App_Web_xxxx.dll assemblies instead of a single compiled assembly. How can I do this?

    Read the article

  • Unable to reserve a job with beanstalkd

    - by Brian Hogg
    I have tried on two different servers to get beanstalkd up and running and to do a couple of tests (locally on Mac OS X, compiled from source, and on a CentOS server, installed with yum). I can get the server running with either:

        sudo beanstalkd -d -p 11300

    or

        sudo beanstalkd -p 11300 &

    I then tried using the PHP lib and it just froze. Connecting directly:

        telnet localhost 11300

    I do the following to mimic the PHP test script:

        use foo
        USING foo
        put 0 0 120 5
        hello
        INSERTED 1
        reserve-with-timeout 0
        TIMED_OUT

    If I just run reserve it's stuck indefinitely. Any ideas?
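
    One detail of the protocol that is easy to miss and may or may not be the issue here: "use" only selects the tube that put writes to, while reserve pulls from the consumer's watch list, which starts out containing only "default". A sketch of the same session with the tube watched before reserving:

        use foo
        USING foo
        put 0 0 120 5
        hello
        INSERTED 1
        watch foo
        WATCHING 2
        reserve-with-timeout 5
        RESERVED 1 5
        hello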

    Read the article

  • Oracle Date Format Convert Hour-Minute to Interval and Disregard Year-Month-Day

    - by dlite922
    I need to compare an event's half-way midpoint against a start and stop time of day. Right now I'm converting the dates you see on the right to HH:MM, and the comparison works until midnight. The query says: WHERE half BETWEEN pStart AND pStop. As you can see below, pStart and pStop have January 1st 2000 dates; this is because the year, month and day are not important to me.

    Valid data:

        +-------+--------+-------+---------------------+---------------------+---------------------+
        | half  | pStart | pStop | half2               | pStart2             | pStop2              |
        +-------+--------+-------+---------------------+---------------------+---------------------+
        | 19:00 | 19:00  | 23:00 | 2012-11-04 19:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 |
        | 20:00 | 19:00  | 23:00 | 2012-11-04 20:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 |
        | 21:00 | 19:00  | 23:00 | 2012-11-04 21:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 |
        | 23:00 | 20:00  | 23:00 | 2012-11-05 23:00:00 | 2000-01-01 20:00:00 | 2000-01-01 23:00:00 |
        +-------+--------+-------+---------------------+---------------------+---------------------+

    Now observe what happens when pStop is midnight or later. Valid data that breaks it:

        +-------+--------+-------+---------------------+---------------------+---------------------+
        | half  | pStart | pStop | half2               | pStart2             | pStop2              |
        +-------+--------+-------+---------------------+---------------------+---------------------+
        | 23:00 | 22:00  | 00:00 | 2012-11-04 23:00:00 | 2000-01-01 22:00:00 | 2000-01-01 00:00:00 |
        | 23:30 | 23:00  | 02:00 | 2012-11-05 23:30:00 | 2000-01-01 23:00:00 | 2000-01-01 02:00:00 |
        +-------+--------+-------+---------------------+---------------------+---------------------+

    Thus my WHERE clause translates to WHERE 19:00 BETWEEN 22:00 AND 00:00, which returns false, and I miss those two correct rows above.

    Question: Is there a way to represent those times as an integer interval so that half BETWEEN pStart AND pStop is correct? I thought about adding 24 when pStop is less than pStart, to turn 00:00 into 24:00, but I don't know an easy way to do that without long string concatenations and number conversions. This would solve the problem, because the pStart-to-pStop span will never be longer than 6 hours.

    Note: the query is much more complex and has other, irrelevant date calculations, but the results are shown above. DATE_FORMAT with '%H:%i' is applied to the first three columns; no formatting is applied to the last three. Thanks for your help.
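
    One way to keep the comparison on the formatted HH:MM values (a sketch of just the condition, not the full query) is to treat a window whose pStop sorts before its pStart as one that wraps past midnight:

        -- Zero-padded 'HH:MM' strings compare correctly as text, so the only special
        -- case is a window that crosses midnight (pStop < pStart).
        WHERE (pStart <= pStop AND half BETWEEN pStart AND pStop)
           OR (pStart >  pStop AND (half >= pStart OR half <= pStop))

    For the sample rows this keeps 23:00 in the 22:00-00:00 window and 23:30 in the 23:00-02:00 window. The alternative of adding 24 hours to pStop when it is smaller than pStart works too, but needs the values as times or numbers rather than strings.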

    Read the article

  • Standard way of distributing source code?

    - by penyuan
    I am relatively new to programming and have built a few working C++ command-line programs with Xcode on Mac OS X (with no dependencies on Mac-only libraries or APIs). My question is: what is the standard way of packaging and distributing the source code (and possibly compiled binaries)? For instance, almost all Linux programs seem to be distributed so that a user simply needs to run ./configure && make && make install from the source directory. Thank you.

    Read the article

  • code to ping websites works sometimes ...

    - by trustfundbaby
    I'm testing out a piece of code to ping a bunch of websites I own on a regular basis, to make sure they're up. I'm using Rails, and so far I have this hideous test action that I'm using to try it out (see below). The problem, though, is that sometimes it works and other times it won't; sometimes it runs through the code just fine, other times it seems to completely ignore the begin/rescue block. I need help (a) figuring out what the problem is and (b) refactoring this to make it look respectable. Your help is much appreciated.

        require 'net/http'
        require 'uri'

        def ping
          @sites = NewsSource.all
          @sites.each do |site|
            if site.uri and !site.uri.empty?
              uri = URI.parse(site.uri)
              response = nil
              path = uri.path.blank? ? '/' : uri.path
              path = uri.query.blank? ? path : "#{path}?#{uri.query}"
              begin
                Net::HTTP.start(uri.host, uri.port) do |http|
                  http.open_timeout = 30
                  http.read_timeout = 30
                  response = http.head(path)
                end
                if response.code.eql?('200') or response.code.eql?('301') or response.code.eql?('302')
                  site.up = true
                else
                  site.up = false
                end
                site.up_check_msg = response.message
                site.up_check_code = response.code
              rescue Errno::EBADF
              rescue Timeout::Error
                site.up = false
                site.up_check_msg = 'timeout'
                site.up_check_code = '408'
              end
              site.up_check_time = 0.seconds.ago
              site.save
            end
          end
        end

    Read the article

  • What information should an SVN/versioned file commit comment contain?

    - by RenderIn
    I'm curious what kind of content should be in a versioned-file commit comment. Should it describe generally what changed (e.g. "The widget screen was changed to display only active widgets"), or should it be more specific (e.g. "A new condition was added to the where clause of the fetchWidget query to retrieve only active widgets by default")? And how atomic should a single commit be? Just the file containing the updated query, in a single commit (e.g. "Updated the widget screen to display only active widgets by default"), or should that plus several other changes and interface changes to a screen share the same commit, with a more general description like "Updated the widget screen: A) display only active widgets by default B) added button to toggle showing inactive widgets"? I see Subversion commit comments being used very differently and was wondering what others have had success with. Some comments are as brief as "updated files", while others are many paragraphs long, and others are formatted so they can be queried and associated with an external system such as JIRA. I used to be extremely descriptive about the reason for the change as well as the specific technical changes; lately I've been scaling back and just giving a general "this is what I changed on this page" kind of comment.

    Read the article
