Search Results

Search found 11146 results on 446 pages for 'dynamic queries'.


  • What's the best way to get a bunch of rows from MySQL if you have an array of integer primary keys?

    - by Evan P.
    I have a MySQL table with an auto-incremented integer primary key. I want to get a bunch of rows from the table based on an array of integers I have in memory in my program. The array ranges from a handful to about 1000 items. What's the most efficient query syntax to get the rows? I can think of a few:

    1. "SELECT * FROM thetable WHERE id IN (1, 2, 3, 4, 5)" (this is what I do now)
    2. "SELECT * FROM thetable where id = 1 OR id = 2 OR id = 3"
    3. Multiple queries of the form "SELECT * FROM thetable WHERE id = 1". Probably the most friendly to the query cache, but expensive due to having lots of query parsing.
    4. A union, like "SELECT * FROM thetable WHERE id = 1 UNION SELECT * FROM thetable WHERE id = 2 ...". I'm not sure if MySQL caches the results of each query; it's also the most verbose format.

    I think using the NoSQL interface in MySQL 5.6+ would be the most efficient way to do this, but I'm not yet up to MySQL 5.6.
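
    For illustration only, a minimal sketch of sending such an in-memory array as a single parameterized IN (...) query; the driver (mysql-connector), the connection details, and the variable names are assumptions, not part of the question:

        import mysql.connector  # assumption: any DB-API driver with %s placeholders works the same way

        conn = mysql.connector.connect(user="app", password="secret", database="mydb")
        cursor = conn.cursor()

        ids = [1, 2, 3, 4, 5]                         # the in-memory array of integer primary keys
        placeholders = ", ".join(["%s"] * len(ids))   # one placeholder per id
        cursor.execute("SELECT * FROM thetable WHERE id IN (%s)" % placeholders, ids)
        rows = cursor.fetchall()                      # all matching rows in one round trip

    This keeps the values out of the SQL text while still producing the single IN (...) statement the question already uses.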

    Read the article

  • Is it possible to load an entire SQL Server CE database into RAM?

    - by DanM
    I'm using LinqToSql to query a small SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table via a foreign key, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of the appeal of LinqToSql. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance? Note: my database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view.
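
    A rough sketch of the manual in-memory approach the question already describes (pull both tables into lists once, then join with LINQ to Objects); it assumes the LINQ to SQL context is available as db and that both entities expose CustomerID and ProductName as in the example:

        // load both tables into memory once (effectively the "database in RAM")
        List<Customer> customers = db.Customers.ToList();
        List<Order> orders = db.Orders.ToList();

        // join and filter entirely in memory with LINQ to Objects
        var stopwatchOrders =
            from c in customers
            join o in orders on c.CustomerID equals o.CustomerID
            where o.ProductName == "Stopwatch"
            select new { Customer = c, Order = o };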

    Read the article

  • Tricky SQL query - need to get time frames

    - by Andrew
    I have stumbled upon a problem: I need a query that will produce a list of speeding time frames. Here is the example data:

        [idgps_unit_location] [dt]             [idgps_unit] [lat] [long] [speed_kmh]
        26                    10/18/2012 18:53  2            47    56     30
        27                    10/18/2012 18:53  2            49    58     31
        28                    10/18/2012 18:53  2            28    37     15
        29                    10/18/2012 18:54  2            56    65     33
        30                    10/18/2012 18:54  2            152   161    73
        31                    10/18/2012 18:55  2            134   143    64
        32                    10/18/2012 18:56  2            22    31     12
        36                    10/18/2012 18:59  2            98    107    47
        37                    10/18/2012 18:59  2            122   131    58
        38                    10/18/2012 18:59  2            91    100    44
        39                    10/18/2012 19:00  2            190   199    98
        40                    10/18/2012 19:01  2            194   203    101
        41                    10/18/2012 19:02  2            182   191    91
        42                    10/18/2012 19:03  2            162   171    78
        43                    10/18/2012 19:03  2            174   183    83
        44                    10/18/2012 19:04  2            170   179    81
        45                    10/18/2012 19:05  2            189   198    97
        46                    10/18/2012 19:06  2            20    29     10
        47                    10/18/2012 19:07  2            158   167    76
        48                    10/18/2012 19:08  2            135   144    64
        49                    10/18/2012 19:08  2            166   175    79
        50                    10/18/2012 19:09  2            9     18     5
        51                    10/18/2012 19:09  2            101   110    48
        52                    10/18/2012 19:09  2            10    19     7
        53                    10/18/2012 19:10  2            32    41     20
        54                    10/18/2012 19:10  1            54    63     85
        55                    10/19/2012 19:11  2            55    64     50

    I need a query that would convert this table into the following report, showing the frames of time during which the speed was 80 or above:

        [idgps_unit] [dt_start]        [lat_start] [long_start] [speed_start] [dt_end]          [lat_end] [long_end] [speed_end] [speed_average]
        2            10/18/2012 19:00  190         199          98            10/18/2012 19:02  182       191        91          96.66666667
        2            10/18/2012 19:03  174         183          83            10/18/2012 19:05  189       198        97          87
        1            10/18/2012 19:10  54          63           85            10/18/2012 19:10  54        63         85          85

    Now, what have I tried? I tried putting this into separate tables and queries and doing some joins... Nothing works and I am very frustrated... I am not even sure whether this can be done with a query at all. Asking for expert help!
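
    For illustration only, a hedged sketch of the common "gaps and islands" approach, assuming a DBMS with window functions, a threshold of 80 km/h, and a (hypothetical) table name gps_unit_location:

        WITH flagged AS (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY idgps_unit ORDER BY dt, idgps_unit_location)
                 - ROW_NUMBER() OVER (PARTITION BY idgps_unit,
                                                   CASE WHEN speed_kmh >= 80 THEN 1 ELSE 0 END
                                      ORDER BY dt, idgps_unit_location) AS grp
            FROM gps_unit_location
        )
        SELECT idgps_unit,
               MIN(dt) AS dt_start,
               MAX(dt) AS dt_end,
               AVG(CAST(speed_kmh AS decimal(10, 4))) AS speed_average
        FROM flagged
        WHERE speed_kmh >= 80          -- assumed threshold
        GROUP BY idgps_unit, grp
        ORDER BY idgps_unit, dt_start;

    Consecutive rows at or above the threshold share the same grp value, so each group is one speeding frame; the lat/long/speed of the frame's start and end rows could then be picked up by joining back on the MIN and MAX idgps_unit_location of each group.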

    Read the article

  • How can I skip some block content while reading in Perl.

    - by Nano HE
    Hello. I plan to skip the content of any block whose start line includes "MaterializeU4()", using the subroutine read_block() below, but it fails.

        # Read a constant definition block from a file handle.
        # Void return when there is no data left in the file.
        # Otherwise return an array ref containing the lines in the block.
        sub read_block {
            my $fh = shift;
            my @lines;
            my $block_started = 0;

            while ( my $line = <$fh> ) {
                # how do I correct the line below? I don't need the 2nd block's content.
                $block_started++
                    if ( ($line =~ /^(status)/) && (index($line, "MaterializeU4") != 0) );

                if ($block_started) {
                    last if $line =~ /^\s*$/;
                    push @lines, $line;
                }
            }

            return \@lines if @lines;
            return;
        }

    Data as below:

        __DATA__
        status DynTest = <dynamic 100>
        vid = 10002
        name = "DynTest"
        units = ""

        status VIDNAME9000 = <U4 MaterializeU4()>
        vid = 9000
        name = "VIDNAME9000"
        units = "degC"

        status DynTest = <U1 100>
        vid = 100
        name = "Hello"
        units = ""

    Output:

        <StatusVariables>
            <SVID logicalName="DynTest" type="L" value="100" vid="10002" name="DynTest" units=""></SVID>
            <SVID logicalName="VIDNAME9000" type="L" value="MaterializeU4()" vid="9000" name="VIDNAME9000" units="degC"></SVID>
            <SVID logicalName="DynTest" type="L" value="100" vid="100" name="Hello" units=""></SVID>
        </StatusVariables>

    [Updated] I printed the value of index($line, "MaterializeU4") and it output 25. Then I updated the code as below:

        $block_started++
            if ( ($line =~ /^(status)/) && (index($line, "MaterializeU4") != 25) );

    Now it works. Any comments on my practice are welcome. Thank you.
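
    A side note, not from the post itself: Perl's index() returns -1 when the substring is not found, so testing for "not found" rather than a hard-coded column position expresses the same skip without depending on where MaterializeU4 happens to appear; a minimal sketch under that assumption:

        # start a block only for "status" lines that do NOT mention MaterializeU4 anywhere
        $block_started++
            if $line =~ /^status/ && index($line, "MaterializeU4") == -1;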

    Read the article

  • How to translate the fields of a database model?

    - by Tõnis M
    I have some tables/models in a web app that will have multilingual content. For example a university, with its description in a default language (English); if the user wants, he can see the same information in another language (if the object has its fields translated). If there were only a few languages then I would just add fields like name_en and name_de and so on, but the number of languages isn't fixed, so that would create a mess. I could also just create a new object with the translated data, but then foreign keys wouldn't work, and only some of the fields can be translated, so that would create duplicate data. Storing the translations in a file and using gettext or something similar is also not an option, since the object's fields can be translated by the website user, not only by developers/admins. What would be the best way to design/architect such a database? Searching the translated data should also not be too complex; it should not require complex joins that would make the queries slower. I'm using PostgreSQL and Ruby on Rails, but I'm not looking for a technical solution, just a general idea of how to design it.
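
    For illustration only, a hedged sketch of one common design: a separate translations table keyed by record, field, and locale, with the default-language value staying on the main table. All table and column names here are assumptions, not from the question:

        CREATE TABLE universities (
            id          serial PRIMARY KEY,
            name        text NOT NULL,        -- default-language (English) value
            description text
        );

        CREATE TABLE translations (
            id              serial PRIMARY KEY,
            record_id       integer NOT NULL REFERENCES universities(id),
            field_name      text    NOT NULL,  -- e.g. 'name' or 'description'
            locale          text    NOT NULL,  -- e.g. 'de', 'et'
            translated_text text    NOT NULL,
            UNIQUE (record_id, field_name, locale)
        );

        -- look up the German description of university 42, falling back to the default
        SELECT COALESCE(t.translated_text, u.description) AS description
        FROM universities u
        LEFT JOIN translations t
               ON t.record_id = u.id AND t.field_name = 'description' AND t.locale = 'de'
        WHERE u.id = 42;

    In a multi-model app the translations table would also carry the source table name (or use one translations table per model); searching a given locale is then a single indexable join rather than a per-language column.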

    Read the article

  • ASP.Net MVC: Change Routes dynamically

    - by Frank
    Hi, usually, when I look at an ASP.NET MVC application, the route table gets configured at startup and is not touched ever after. I have a couple of closely related questions about that:

    1. Is it possible to change the route table at runtime? How would/should I avoid threading issues?
    2. Is there maybe a better way to provide a dynamic URL? I know that IDs etc. can appear in the URL, but I can't see how this could be applicable to what I want to achieve.
    3. How can I arrange that, even though I have the default controller/action route defined, the default route does not serve a specific combination, e.g. the "Post" action on the "Comments" controller is not reachable through the default route?

    Background: comment spammers usually grab the posting URL from the website and then don't bother to go through the website anymore to do their automated spamming. If I regularly change my post URL to some random one, spammers would have to go back to the site and find the correct post URL before trying to spam it again. If that URL changes constantly, I'd think that could make the spammers' work more tedious, which should usually mean they give up on the affected URL.
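
    Purely as an illustration of question 1 (not an endorsement that it is the best approach): a minimal sketch of swapping a comment-post route for one with a fresh random segment at runtime. The URL pattern, route naming, and method name are assumptions:

        using System;
        using System.Web.Mvc;
        using System.Web.Routing;

        public static class CommentRouteRotator
        {
            private static Route current;   // the comment route we last registered

            // replace the comment-post route with one using a new random URL segment
            public static void Rotate()
            {
                RouteCollection routes = RouteTable.Routes;
                using (routes.GetWriteLock())                    // serialize writers against readers
                {
                    if (current != null)
                        routes.Remove(current);

                    string token = Guid.NewGuid().ToString("N"); // the random part of the URL
                    current = routes.MapRoute(
                        "CommentPost-" + token,
                        "comments/" + token + "/post",
                        new { controller = "Comments", action = "Post" });
                }
            }
        }

    Note that MapRoute appends to the end of the collection, so in a real application the rotated route would have to be kept ahead of any catch-all default route (or, per question 3, the default route constrained so it never matches Comments/Post).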

    Read the article

  • Creating a clickable progress bar

    - by Cory
    What I'm essentially building is a WebKit-based controller that communicates with a program called Ecoute.app. The controller has a progress bar that indicates the progress made through the currently playing song, but I want to make this bar's position adjustable with a click. What I have is:

        function barClick() {
            var progress = Ecoute.playerPosition();
            var width = 142.5;
            var percentElapsed = progress / length;
            var position = width * percentElapsed;
            Ecoute.setPlayerPosition(position);
        }

    with Ecoute.playerPosition() returning the player's current position. The width has previously been defined as a dynamic value:

        var width = 142.5 / length * progress + 1.63;

    with length being the current track length and progress being the player's position. This has been able to dynamically stretch a progression overlay image to indicate the position of the track via the desktop controller. However, the maximum width used in the second snippet does not appear to allow the first function to work properly. Any help determining the correct value or approach would be hugely appreciated.
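
    For illustration only, a hedged sketch of mapping a click on the bar to a player position. It assumes the bar is 142.5px wide as above, that length is the track length, that the bar element can be looked up by a hypothetical id, and that Ecoute.setPlayerPosition() accepts a value on the same scale that Ecoute.playerPosition() returns; none of this is confirmed by the question:

        var bar = document.getElementById("progress-bar");  // hypothetical element id

        bar.addEventListener("click", function (e) {
            var width = 142.5;                          // full width of the bar in pixels
            var fraction = e.offsetX / width;           // how far along the bar the click landed (0..1)
            Ecoute.setPlayerPosition(fraction * length); // seek proportionally into the track
        });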

    Read the article

  • jsf and eclipse setup. Lib in Explorer is different than lib in tomcat. How come?

    - by user384706
    Hi, I am a starter with JSF 2.0, and I have a question related to Eclipse (I am using Helios).

    1) I create a Dynamic Web Project.
    2) I add the JSF project facet.
    3) I choose a JSF user library (I have created it using MyFaces).

    All OK so far. I notice though that in the Project Explorer the WebContent/WEB-INF/lib directory is empty instead of containing the MyFaces jars. The application works fine though. I looked into this, and the jars are actually being placed in the corresponding lib directory of the app deployed under wtpwebapps of the Tomcat instance of Eclipse, in the .plugins directory. OK, it works, but IMHO it is incorrect to have the Project Explorer inconsistent with the directories actually deployed, i.e. lib shown empty in Project Explorer but containing jars under wtpwebapps. Am I wrong to dislike this inconsistency? Is this how it should work, or am I doing something wrong in the way I am setting up my project? Thanks!

    Read the article

  • newbie hibernate first level cache confusion

    - by Bruce
    Hi all, I'm just getting to grips with Hibernate and I'm a little bit confused. I just wanted to watch the operation of the first-level cache, which I understood to batch up queries until the end of the session. But if I create an object, Hibernate saves it immediately, so that when I later update it in the same transaction, it has to do an update too:

        Session session = factory.getCurrentSession();
        session.beginTransaction();

        Test1 test1 = new Test1();
        test1.setName("Test 1");
        test1.setValue(10);

        // Touch it
        session.save(test1);
        System.out.println("At checkpoint 1");

        test1.setValue(20);
        session.getTransaction().commit();

    I see the SQL for the save, then 'At checkpoint 1', then the SQL for the update. Do I have something set up wrong, or am I misunderstanding Hibernate's first-level cache? Is there a good document on the first-level cache? I didn't find anything in the Hibernate docs, but I could easily have missed it. Thanks!

    Read the article

  • jquery .on() toggle div

    - by lollo
    I'm developing a blog with dynamic content loaded via Ajax requests, and I'm trying to hide/show the "post comment" form of an article with a button:

        <div id="post">
            ...
            <ol id="update" class="timeline">
                <!-- comments here -->
            </ol>
            <button class="mybutton Bhome">Commenta</button><!-- hide/show button -->
            <div class="mycomment"><!-- div to hide/show -->
                <form action="#" method="post">
                    Nome: <input type="text" size="12" id="name" /><br />
                    Email: <input type="text" size="12" id="email" /><br />
                    <textarea rows="10" cols="50" class="mytext" id="commentArea"></textarea><br />
                    <input type="submit" class="Bhome" value=" Post " />
                </form>
            </div><!-- mycomment -->
        </div><!-- post -->

    This jQuery code works well but affects ALL the posts:

        $("div").on("click", ".mybutton", function(e){
            $(".mycomment").slideToggle("slow");
            e.stopPropagation();
        });

    I want only the clicked hide/show button to affect the related article, but if I have more than one article on my page, the button with the .mybutton class hides or shows the comment forms of all the articles.
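
    A hedged sketch (not from the post) of one way to scope the toggle to the form that immediately follows the clicked button, given the markup above where the .mycomment div is the button's next sibling:

        $("div").on("click", ".mybutton", function (e) {
            // toggle only the .mycomment that directly follows this button
            $(this).next(".mycomment").slideToggle("slow");
            e.stopPropagation();
        });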

    Read the article

  • Sql Server 2005 Check Constraint not being applied in execution when using variables

    - by DarylS
    Here is some SQL sample code:

        -- Create 2 Sales tables with constraints based on the sale date
        create table Sales1 (SaleDate datetime, Amount money)
        ALTER TABLE dbo.Sales1 ADD CONSTRAINT CK_Sales1 CHECK (([SaleDate]>='01 May 2010'))
        GO
        create table Sales2 (SaleDate datetime, Amount money)
        ALTER TABLE dbo.Sales2 ADD CONSTRAINT CK_Sales2 CHECK (([SaleDate]<'01 May 2010'))
        GO

        -- Insert some data into Sales1
        insert into Sales1 (SaleDate, Amount) values ('02 May 2010', 50)
        insert into Sales1 (SaleDate, Amount) values ('03 May 2010', 60)
        GO

        -- Insert some data into Sales2
        insert into Sales2 (SaleDate, Amount) values ('30 Mar 2010', 10)
        insert into Sales2 (SaleDate, Amount) values ('31 Mar 2010', 20)
        GO

        -- Create a view that combines these 2 tables
        create VIEW [dbo].[Sales]
        AS
        SELECT SaleDate, Amount FROM Sales1
        UNION ALL
        SELECT SaleDate, Amount FROM Sales2
        GO

        -- Get the results

        -- Query 1
        select * from Sales where SaleDate < '31 Mar 2010'
        -- if you look at the execution plan, this query only looks at Sales2 (which is good)

        -- Query 2
        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'
        select * from Sales where SaleDate < @SaleDate
        -- if you look at the execution plan, this query looks at Sales1 and Sales2 (which is NOT good)

    Looking at the execution plans you will see that the two queries are different. For Query 1 the only table that is accessed is Sales1's sibling Sales2 (which is good). For Query 2 both tables are accessed (which is bad). Why are these execution plans different, and how do I get Query 2 to only access the relevant table when variables are used? I have tried adding an index on the SaleDate column and that does not seem to help.

    Read the article

  • What is the return type of my linq query?

    - by Ulhas Tuscano
    I have two tables, A & B. I can fire LINQ queries and get the required data for the individual tables, as I know what each of the tables will return, as shown in the example. But when I join both tables, I am not aware of the return type of the LINQ query. This problem can be solved by creating a class that holds ID, Name and Address properties, but having to create a class for the return type of every join query before writing it is not a convenient way to work. Is there any other method available to achieve this?

        private IList<A> GetA()
        {
            var query = from a in objA select a;
            return query.ToList();
        }

        private IList<B> GetB()
        {
            var query = from b in objB select b;
            return query.ToList();
        }

        private IList<**returnType**?> GetJoinAAndB()
        {
            var query = from a in objA
                        join b in objB on a.ID equals b.AID
                        select new { a.ID, a.Name, b.Address };
            return query.ToList();
        }
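
    For illustration only, a minimal sketch of the named-class approach the question mentions, reusing objA and objB from the snippet above; the property types (int/string) are assumptions:

        // hypothetical result type holding the joined columns
        public class JoinResult
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public string Address { get; set; }
        }

        private IList<JoinResult> GetJoinAAndB()
        {
            var query = from a in objA
                        join b in objB on a.ID equals b.AID
                        select new JoinResult { ID = a.ID, Name = a.Name, Address = b.Address };
            return query.ToList();
        }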

    Read the article

  • Optimized 2D Tile Scrolling in OpenGL

    - by silicus
    Hello, I'm developing a 2D side-scrolling game and I need to optimize my tiling code to get a better frame rate. As of right now I'm using a texture atlas and 16x16 tiles at a 480x320 screen resolution. The level scrolls in both directions and is significantly larger than one screen (thousands of pixels). I use glTranslate for the actual scrolling. So far I've tried:

    - Drawing only the on-screen tiles using glTriangles, 2 per square tile (too much overhead)
    - Drawing the entire map as a display list (great on a small level, way too slow on a large one)
    - Partitioning the map into display lists half the size of the screen, then culling display lists (still slows down for 2-directional scrolling; the overdraw is not efficient)

    Any advice is appreciated, but in particular I'm wondering:

    - I've seen vertex arrays/VBOs suggested for this because they're dynamic. What's the best way to take advantage of this? If I simply keep one screen of vertices plus a bit of overdraw, I'd have to recopy the array every few frames to account for the change in relative coordinates (shift everything over and add the new rows/columns). If I use more overdraw this doesn't seem like a big win; it's like the half-screen display-list idea.
    - Does glScissor give any gain if used on a bunch of small tiles like this, be it a display list or a vertex array/VBO?
    - Would it be better just to build the level out of large textures and then use glScissor? Would losing the memory savings of tiling be an issue for mobile development if I do this (just curious, I'm currently on a PC)? This approach was mentioned here.

    Thanks :)

    Read the article

  • Magento: The best way to hook into the checkout process

    - by dan.codes
    I am integrating with a third-party order management system and I have to make calls to it throughout the checkout process. The problem is, I don't think there are many events available, because of how the onepage checkout is all done in JavaScript/AJAX calls. There are a few, like after saving the shipping method, and none of the dynamic events seem to fit either. Basically, I need to know as soon as the user gets access to the shipping method tab, so I can pass the billing/shipping address over, and then after the shipping method is saved, to pass that over. Obviously there is an event for that, and I know there are events for when you submit an order, so that should be good. I guess I only need to know when the billing/shipping address is saved. I was using controller_action_layout_render_before_checkout_onepage_progress, but the progress step gets called way too late. It just doesn't seem like there are a lot of hooks through the onepage checkout. If anyone can give me some examples of what they have done, that would be great!

    Read the article

  • All connections in pool are in use

    - by veljkoz
    We currently have a little situation on our hands: it seems that someone, somewhere forgot to close a connection in code. The result is that the pool of connections gets exhausted relatively quickly. As a temporary patch we added Max Pool Size = 500; to our connection string on the web service, and we recycle the pool when all connections are spent, until we figure this out. So far we have done this:

        SELECT SPId
        FROM MASTER..SysProcesses
        WHERE DBId = DB_ID('MyDb') and last_batch < DATEADD(MINUTE, -15, GETDATE())

    to get the SPIDs that haven't been used for 15 minutes. We're now trying to get the query that was last executed by such a SPID with:

        DBCC INPUTBUFFER(61)

    but the queries displayed are various, meaning either something at the base level of connection handling is broken, or our deduction is erroneous... Is there an error in our thinking here? Does the DBCC / sysprocesses approach give the results we're expecting, or is there some side-effect catch (for example, do connections sitting in the pool have an influence)? (Please stick to what we could find out using SQL, since the guys who wrote the code are many and not all present right now.)
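
    As an illustration only (not part of the question): one hedged way to see which client machine or application is holding most of the idle connections, using the same sysprocesses view already queried above:

        SELECT hostname, program_name, loginame, COUNT(*) AS open_connections
        FROM master..sysprocesses
        WHERE dbid = DB_ID('MyDb')
        GROUP BY hostname, program_name, loginame
        ORDER BY open_connections DESC;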

    Read the article

  • using threads in menu options

    - by vbNewbie
    I have an app that has a console menu with 2/3 selections. One process involves uploading a file and performing a lengthy search on its contents, while another process involves SQL queries and is interactive with the user. I wish to use threads so that one process can run while the menu offers the option to run the second process. However, you cannot run the first process twice. I have created the threads and corrected some compilation errors, but the threading options are not working correctly. Any help appreciated. In main:

        Dim tm As Thread = New Thread(AddressOf loadFile)
        Dim ts As Thread = New Thread(AddressOf reports)
        ....
        While Not response.Equals("3")
            Try
                Console.Write("Enter choice: ")
                response = Console.ReadLine()
                Console.WriteLine()
                If response.Equals("1") Then
                    Console.WriteLine("Thread 1 doing work")
                    tm.SetApartmentState(ApartmentState.STA)
                    tm.IsBackground = True
                    tm.Start()
                    response = String.Empty
                ElseIf response.Equals("2") Then
                    Console.WriteLine("Starting a second Thread")
                    ts.Start()
                    response = String.Empty
                End If
                ts.Join()
                tm.Join()
            Catch ex As Exception
                errormessage = ex.Message
            End Try
        End While

    I realize that a form-based app would be easier to implement, perhaps just calling different forms to handle the processes, but I really don't have that option now since the console app will be added to an API later. Here are my two processes from the menu functions. Also, I'm not sure what to do with the boolean variable again, as suggested below.

        Private Sub LoadFile()
            Dim dialog As New OpenFileDialog
            Dim response1 As String = Nothing
            Dim filepath As String = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments)
            dialog.InitialDirectory = filepath
            If dialog.ShowDialog() = DialogResult.OK Then
                fileName = dialog.FileName
            ElseIf DialogResult.Cancel Then
                Exit Sub
            End If
            Console.ResetColor()
            Console.Write("Begin Search -- Discovery Search, y or n? ")
            response1 = Console.ReadLine()
            If response1 = "y" Then
                Search()
            ElseIf response1 = "n" Then
                Console.Clear()
                main()
            End If
            isRunning = False
        End Sub

    and the second one:

        Private Shared Sub report()
            Dim rptGen As New SearchBlogDiscovery.rptGeneration
            Console.WriteLine("Thread Process started")
            rptGen.main()
            Console.WriteLine("Thread Process ended")
            isRunning = False
        End Sub

    Read the article

  • Advice on Minimizing Stored Procedure Parameters

    - by RPM1984
    Hi guys, I have an ASP.NET MVC web application that interacts with a SQL Server 2008 database via Entity Framework 4.0. On a particular page, I call a stored procedure in order to pull back some results based on selections in the UI. Now, the UI has around 20 different input selections, ranging from a textbox to dropdown lists, checkboxes, etc. Each of those inputs is "grouped" into a logical section. Example:

        Search box:  "Foo"
        Checkbox A1: ticked,  Checkbox A2: unticked
        Dropdown A:  option 3 selected
        Checkbox B1: ticked,  Checkbox B2: ticked,  Checkbox B3: unticked

    So I need to call the sproc like this:

        exec SearchPage_FindResults
            @SearchQuery = 'Foo',
            @IncludeA1 = 1, @IncludeA2 = 0,
            @DropDownSelection = 3,
            @IncludeB1 = 1, @IncludeB2 = 1, @IncludeB3 = 0

    The UI is not too important to this question; I just wanted to give some perspective. Essentially, I'm pulling back results for a search query and filtering those results based on a bunch of (optional) selections the user can filter on. Now, my questions/queries:

    - What's the best way to pass these parameters to the stored procedure? Are there any tricks/new ways (e.g. SQL Server 2008) to do this? Special "table" parameters/arrays: can we pass through user-defined types? Keep in mind I'm using Entity Framework 4.0, but I could always use classic ADO.NET for this if required.
    - What about XML? What are the serialization/deserialization costs here? Is it worth it?
    - How about a parameter for each logical section? Comma-separated perhaps? Just thinking out loud.

    This page is particularly important from a user point of view and needs to perform really well. The stored procedure is already heavy on logic, so I want to minimize the performance implications; keep that in mind. With that said, what is the best approach here?
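
    For illustration of the "table parameter" idea only: SQL Server 2008 does support table-valued parameters, and a minimal sketch looks like the following, where the type name, parameter names, and placeholder body are assumptions rather than the real SearchPage_FindResults:

        -- a reusable table type holding the ids of the ticked checkboxes in one section
        CREATE TYPE dbo.IdList AS TABLE (Id int NOT NULL PRIMARY KEY);
        GO

        CREATE PROCEDURE dbo.SearchPage_FindResults_Sketch
            @SearchQuery       nvarchar(200),
            @SectionAFilters   dbo.IdList READONLY,  -- one row per ticked checkbox in section A
            @DropDownSelection int
        AS
        BEGIN
            -- placeholder body so the sketch runs on its own: echo the inputs
            SELECT @SearchQuery AS SearchQuery,
                   @DropDownSelection AS DropDownSelection,
                   (SELECT COUNT(*) FROM @SectionAFilters) AS SectionAFilterCount;
        END
        GO

    From ADO.NET such a parameter is passed as a SqlParameter with SqlDbType.Structured whose value is a DataTable containing a matching Id column.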

    Read the article

  • google maps in jquery mobile

    - by paullb
    When showing a Google map in jQuery Mobile, it appears (after reading the forums) that code like the following is required:

        <div data-role="page" data-theme="b" class="page-map" style="width:100%; height:100%;">
            <div data-role="header"><h1>Map Page</h1></div>
            <div data-role="content" style="width:100%; height:100%; padding:0;">
                <div id="map_canvas" style="width:100%; height:100%;"></div>
            </div>
        </div>

    Taking away the height on the outside div causes the div to drop to height 0 and the map not to be displayed. I was hoping to get some dynamic text below the map (based on the contents of the ...), whose length I do not know, so I can't set an absolute height. Has anyone got a workaround for this problem?

    Read the article

  • page variable in a repeater

    - by carrot_programmer_3
    Hi, I'm having a bit of an issue with an ASP.NET repeater. I'm building a categories carousel, with the dynamic categories being output by a repeater. Each item is a LinkButton control that passes an argument of the category id to the ItemCommand handler. A page variable is set by this handler to track what the selected category id is:

        public String SelectedID
        {
            get
            {
                object o = this.ViewState["_SelectedID"];
                if (o == null)
                    return "-1";
                else
                    return (String)o;
            }
            set
            {
                this.ViewState["_SelectedID"] = value;
            }
        }

    The problem is that I can't seem to read this value while iterating through the repeater as follows:

        <asp:Repeater ID="categoriesCarouselRepeater" runat="server"
            onitemcommand="categoriesCarouselRepeater_ItemCommand">
            <ItemTemplate>
                <%#Convert.ToInt32(Eval("CategoryID")) == Convert.ToInt32(SelectedID) ? "<div class=\"selectedcategory\">" : "<div>"%>
                <asp:LinkButton ID="LinkButton1" CommandName="select_category"
                    CommandArgument='<%#Eval("CategoryID")%>' runat="server"><img src="<%#Eval("imageSource")%>" alt="category" /><br />
                </asp:LinkButton>
                </div>
            </ItemTemplate>
        </asp:Repeater>

    Calling <%=SelectedID%> in the item template works, but when I try the following expression, the value of SelectedID comes back empty:

        <%#Convert.ToInt32(Eval("CategoryID")) == Convert.ToInt32(SelectedID) ? "match" : "not a match"%>

    The value is being set as follows:

        protected void categoriesCarouselRepeater_ItemCommand(object source, RepeaterCommandEventArgs e)
        {
            SelectedID = e.CommandArgument.ToString();
        }

    Any ideas what's wrong here?

    Read the article

  • PHP, MySQL, Memcache / Ajax Scaling Problem

    - by Jeff Andersen
    I'm building an Ajax tic-tac-toe game in PHP/MySQL. The premise of the game is that you can share a URL like mygame.com/123 with your friends and play multiple simultaneous games. The way I have it set up is that a file (reload.php) is called every 3 seconds while the user is viewing their game board page. This reload.php builds their game boards, and the output (HTML) replaces their current game board (thus showing games in which it is their turn). Initially I built it entirely with PHP/MySQL and had zero caching. A friend suggested handling all of the temporary/quick-read information through memcache (storing moves and ID matchups) and then building the game boards from that information. My issue is that both solutions hit a wall when there are roughly 30-40 active users with roughly 40-50 games running. It is running on a VPS from VPS.net with 2 nodes (dedicated CPU: 1.2 GHz, RAM: 752 MB). Each call to reload.php performs 3 select and 2 insert queries. The size of the data being pulled is negligible. The same actions happen on index.php to build the boards for the initial visit. Now that the backstory is done, my question is: would there be a bottleneck in that each user is polling the same file every 3 seconds to rebuild their game boards, while all users are sitting on index.php, from which the AJAX calls are made within the HTML? If so, is it possible to spread the users' calls out over a set of files designated for building the game boards (e.g. reload1.php, 2, 3, etc.) and direct users to the appropriate file? Would this relieve the pressure? A long-winded explanation; however, I didn't have anywhere else to ask. Thanks very much for any insight.

    Read the article

  • Best way to parse this particular string using awk / sed?

    - by Jack
    Hi, I need to get a particular version string from a file (call it version.lst) and use it in a comparison in a shell script. For example's sake, the file contains lines that look like this:

        V1.000 -- build date and other info here -- APP1
        V1.000 -- build date and other info here -- APP2
        V1.500 -- build date and other info here -- APP3

    ... and so on. Let's say I am trying to grab the version (in this case, V1.000) for APP1. Obviously, the versions can change and I want this to be dynamic. What I have right now works:

        var=`cat version.lst | grep " -- APP1" | grep -Eo "V[0-9].[0-9]{3}"`

    The pipe to the first grep gets the line containing APP1, and the second grep gets the version string. However, I hear grep is not the way to do this, so I'd like to learn the best way using awk or sed. Any ideas? I am new to both and haven't found a tutorial easy enough to learn the syntax from. Do they support egrep-style patterns? Thanks!
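
    As an illustrative sketch only (not from the post): the same extraction in a single awk invocation, assuming the version is always the first whitespace-separated field and the application name is the last:

        # print field 1 of the line whose last field is exactly APP1
        var=$(awk '$NF == "APP1" { print $1 }' version.lst)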

    Read the article

  • how to add data in Table of Jasper Report using Java

    - by Areeb Gillani
    I am here to ask just a simple question: I am trying to pass data to a Jasper report using Java, but I don't know how to, because the table data is very dynamic, which is why I cannot pass an SQL query. Any idea for this? I have a 2D array of Object type where I have all the data, so how can I pass that? Thanks in advance! :)

        ConnectionManager con = new ConnectionManager();
        con.establishConnection();
        String fileName = "Pmc_Bill.jrxml";
        String outFileName = "OutputReport.pdf";
        HashMap params = new HashMap();
        params.put("PName", pname);
        params.put("PSerial", psrl);
        params.put("PGender", pgen);
        params.put("PPhone", pph);
        params.put("PAge", page);
        params.put("PRefer", pref);
        params.put("PDateR", dateNow);
        try {
            JasperReport jasperReport = JasperCompileManager.compileReport(fileName);
            if (jasperReport != null)
                System.out.println("so far so good ");
            // Fill the report using an empty data source
            JasperPrint jasperPrint = JasperFillManager.fillReport(jasperReport, params,
                    new JRTableModelDataSource(tbl.getModel())); // con.connection);
            try {
                JasperExportManager.exportReportToPdfFile(jasperPrint, outFileName);
                System.out.printf("File exported successfully");
            } catch (Exception e) {
                e.printStackTrace();
            }
            JasperViewer.viewReport(jasperPrint);
        } catch (JRException e) {
            JOptionPane.showMessageDialog(null, e);
            e.printStackTrace();
            System.exit(1);
        }
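
    A hedged sketch (the column names and sample rows are assumptions) of wrapping a 2D Object array in a TableModel so it can be fed to the JRTableModelDataSource already used above; for this to bind, the report's field names must match the column names:

        import javax.swing.table.DefaultTableModel;
        import net.sf.jasperreports.engine.data.JRTableModelDataSource;

        // hypothetical column names; in the real code "data" is the existing 2D Object array
        String[] columnNames = { "Item", "Quantity", "Price" };
        Object[][] data = {
            { "Consultation", 1, 500.0 },
            { "X-Ray",        2, 750.0 },
        };

        DefaultTableModel model = new DefaultTableModel(data, columnNames);
        JRTableModelDataSource dataSource = new JRTableModelDataSource(model);

        // then fill exactly as above, passing this data source instead of tbl.getModel():
        // JasperPrint jasperPrint = JasperFillManager.fillReport(jasperReport, params, dataSource);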

    Read the article

  • Large Switch statements: Bad OOP?

    - by Mystere Man
    I've always been of the opinion that large switch statements are a symptom of bad OOP design. In the past I've read articles that discuss this topic, and they provide alternative OOP-based approaches, typically based on polymorphism, instantiating the right object to handle each case. I'm now in a situation that has a monstrous switch statement based on a stream of data from a TCP socket, in which the protocol consists of basically a newline-terminated command, followed by lines of data, followed by an end marker. The command can be one of 100 different commands, so I'd like to find a way to reduce this monster switch statement to something more manageable. I've done some googling to find the solutions I recall, but sadly Google has become a wasteland of irrelevant results for many kinds of queries these days. Are there any patterns for this sort of problem? Any suggestions on possible implementations? One thought I had was to use a dictionary lookup, matching the command text to the object type to instantiate. This has the nice advantage of merely creating a new object and inserting a new command/type in the table for any new command. However, this also has the problem of type explosion: I now need 100 new classes, plus I have to find a way to interface them cleanly to the data model. Is the "one true switch statement" really the way to go? I'd appreciate your thoughts, opinions, or comments.
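
    For illustration of the dictionary-lookup idea only: a minimal C# sketch with invented command names and stand-in handlers (the real protocol commands and data model are not shown in the question):

        using System;
        using System.Collections.Generic;

        public interface ICommandHandler
        {
            void Handle(IReadOnlyList<string> dataLines);
        }

        // two stand-in handlers so the sketch compiles; the real app would have one per command
        public class LoginHandler : ICommandHandler
        {
            public void Handle(IReadOnlyList<string> dataLines) { /* process LOGIN data */ }
        }

        public class StatusHandler : ICommandHandler
        {
            public void Handle(IReadOnlyList<string> dataLines) { /* process STATUS data */ }
        }

        public class CommandDispatcher
        {
            // command text -> factory for the handler that deals with it
            private static readonly Dictionary<string, Func<ICommandHandler>> Handlers =
                new Dictionary<string, Func<ICommandHandler>>(StringComparer.OrdinalIgnoreCase)
                {
                    { "LOGIN",  () => new LoginHandler() },
                    { "STATUS", () => new StatusHandler() },
                    // one entry per protocol command
                };

            public void Dispatch(string command, IReadOnlyList<string> dataLines)
            {
                Func<ICommandHandler> factory;
                if (Handlers.TryGetValue(command, out factory))
                    factory().Handle(dataLines);
                else
                    throw new InvalidOperationException("Unknown command: " + command);
            }
        }

    Registering a new command is then a one-line table entry plus its handler class; grouping related commands into shared handlers is one way to soften the type explosion the question worries about.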

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow: around 70 seconds for that query. I understand it needs a sequential scan, but it still seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow and how to make it faster?
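
    A side note, not from the post: in the plan above the seq scan on files takes ~62 seconds for ~800k narrow rows, and the planner estimates 1,140,941 rows against 805,270 actually returned, which is consistent with table bloat or stale statistics; assuming maintenance is permitted, a cheap first check is:

        VACUUM ANALYZE files;
        VACUUM ANALYZE properties;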

    Read the article
