Search Results

Search found 17537 results on 702 pages for 'doctrine query'.

Page 623/702

  • What Language Feature Can You Just Not Live Without?

    - by akdom
    I always miss Python's built-in doc strings when working in other languages. I know this may seem odd, but it allows me to cut down significantly on excess comments while still providing a clean description of my code and any interfaces therein. What language feature can you just not live without? If someone were building a new language and asked you what one feature they absolutely must include, what would it be? This is getting kind of long, so I figured I'd do my best to summarize. Entries are paraphrased to be language agnostic; if you know of a language which uses something mentioned, please add it in the parentheses to the right of the feature. And if you have a better format for this list, by all means try it out (if it doesn't seem to work, I'll just roll back).

        Regular Expressions ~ torial (Perl)
        Garbage Collection ~ SaaS Developer (Python, Perl, Ruby, Java, .NET)
        Anonymous Functions ~ Vinko Vrsalovic (Lisp, Python)
        Arithmetic Operators ~ Jeremy Ross (Python, Perl, Ruby, Java, C#, Visual Basic, C, C++, Pascal, Smalltalk, etc.)
        Exception Handling ~ torial (Python, Java, .NET)
        Pass By Reference ~ Chris (Python)
        Unified String Format ~ WalloWizard (C#)
        Generics ~ torial (Python, Java, C#)
        Integrated Query Equivalent to LINQ ~ Vyrotek (C#)
        Namespacing ~ Garry Shutler ()
        Short Circuit Logic ~ Adam Bellaire ()
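
    To illustrate the doc-string point, a minimal Python sketch (the function and its numbers are invented for the example): the description lives on the function object itself, which is what lets it replace a block of comments.

        def celsius_to_fahrenheit(celsius):
            """Convert a temperature from degrees Celsius to Fahrenheit.

            The docstring travels with the function object, so tools and the
            REPL can read it at runtime via help() or __doc__.
            """
            return celsius * 9 / 5 + 32

        print(celsius_to_fahrenheit.__doc__)   # the description is attached to the code
        help(celsius_to_fahrenheit)            # interactive documentation for free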

    Read the article

  • Setcookie > sniff > output on same page

    - by lokust
    Hi, I wonder if someone can help shed some light on this: I drop a cookie if a user arrives at the site with a specific key/value in the query string, e.g. http://www.somesite.com?key=hmm01. The cookie code sits at the top of the template, before the XHTML DOCTYPE declaration:

        <?php
        header("Content-Type: text/html; charset=utf-8");
        ob_start();
        if (isset($_GET['key'])) {
            setcookie("cookname", $_GET['key'], time()+2592000); /* Expires in a month */
        }
        ob_end_flush();
        ?>

    On the same page, inside the body, I have the following PHP code that sniffs the cookie and outputs some text:

        <?php
        switch ($cookievalue) {
            case 'hmm01': echo "abc"; break;
            case 'hmm02': echo "def"; break;
            case 'hmm03': echo "ghi"; break;
            default:      echo "hello";
        }
        ?>

    The problem is that when the user first arrives, the sniffer script doesn't detect the cookie and outputs the default text "hello". Only when the user refreshes the page or navigates to a different page does the sniffer detect the cookie. Any ideas on how to drop the cookie and output the correct text without a page refresh? Many thanks.

    Read the article

  • iPhone SDK: loading UITableView from SQLite

    - by leon
    Hi, I think I have a good handle on UITableViews and on getting/inserting data from/to an SQLite db. I am struggling with an architectural question. My application saves 3 values in the database, and there can be many, many rows. How would I load them into the table? In all the tutorials I have seen, at some point the entire database is loaded into an NSMutableArray or similar object by performing a SELECT statement. Then, when -(UITableViewCell *) tableView: (UITableView *) tableView cellForRowAtIndexPath: (NSIndexPath *) indexPath is called, the required rows are doled out from the previously loaded NSMutableArray (or similar structure). But what if I have thousands of rows? Why would I pre-load them? Should I just query the database each time cellForRowAtIndexPath is called? If so, what would I use as an index? Each row in the table will have an AUTOINCREMENT index, but since some rows may be deleted, the index will not correspond to row positions in the table (in the SQL I may have something like this, with the row with index 3 missing):

        1  Data1  Data1
        2  Data2  Data2
        4  data3  data3

    Thanks
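
    One pattern the question hints at, fetching only the row needed for each cell, can be sketched with a plain SQLite query. This is just the query side in Python (table and column names are invented), not the Objective-C table-view code: ordering by the key and skipping N rows makes gaps in the ids irrelevant.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY AUTOINCREMENT, a TEXT, b TEXT)")
        conn.executemany("INSERT INTO items (a, b) VALUES (?, ?)",
                         [("Data1", "Data1"), ("Data2", "Data2"), ("gone", "gone"), ("data3", "data3")])
        conn.execute("DELETE FROM items WHERE id = 3")   # ids now have a gap: 1, 2, 4

        def row_for_index(index):
            # Order by the key and skip `index` rows, so holes in the ids don't matter.
            # (cellForRowAtIndexPath would hand us `index`; here it is just an int.)
            return conn.execute(
                "SELECT id, a, b FROM items ORDER BY id LIMIT 1 OFFSET ?", (index,)
            ).fetchone()

        print(row_for_index(0))  # (1, 'Data1', 'Data1')
        print(row_for_index(2))  # (4, 'data3', 'data3')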

    Read the article

  • cfchart ignores my scalefrom value

    - by Monte Chan
    Hi all, I have the following code in my page. The style variable holds the custom style.

        <cfchart chartheight="450" chartwidth="550" gridlines="9" yaxistitle="Score"
                 scalefrom="20" scaleto="100" style="#style#" format="png">
            <cfchartseries query="variables.chart_query" type="scatter" seriescolor="##000000"
                           itemcolumn="MyItem" valuecolumn="MyScore"/>
        </cfchart>

    Before I begin, please go to http://www.monteandjanicechan.com/chart_good.jpg. This is how I want my report to come up. On the x-axis, there will always be three items as long as at least one of them has values. If an item does not have any values (e.g. 2010), there will not be a marker in the chart. The problem occurs only when a single item has values. Please see http://www.monteandjanicechan.com/chart_bad.jpg. As you can see, 2008 and 2010 do not have any values, and the y-axis is now scaled from 0 to 100. I have tried giving one of the items (e.g. 2008) a value of 0 or something off the chart; it then scales according to this off-the-chart value and the 2009 value. In short, I have to have at least two items with values between 20 and 100 in order for cfchart to scale from 20 to 100. My question is, how can I correct the issue so that cfchart will ALWAYS scale from 20 to 100? I am running CF9. Thanks in advance, Monte

    Read the article

  • MySql Geospatial bug..?

    - by ShaChris23
    This question is for MySQL geospatial-extension experts. The following query doesn't return the result that I'm expecting:

        create database test_db;
        use test_db;
        create table test_table (g polygon not null);
        insert into test_table (g) values (geomfromtext('Polygon((0 5,5 10,7 8,2 3,0 5))'));
        insert into test_table (g) values (geomfromtext('Polygon((2 3,7 8,9 6,4 1,2 3))'));
        select X(PointN(ExteriorRing(g),1)), Y(PointN(ExteriorRing(g),1)),
               X(PointN(ExteriorRing(g),2)), Y(PointN(ExteriorRing(g),2)),
               X(PointN(ExteriorRing(g),3)), Y(PointN(ExteriorRing(g),3)),
               X(PointN(ExteriorRing(g),4)), Y(PointN(ExteriorRing(g),4))
        from test_table
        where MBRContains(g,GeomFromText('Point(3 6)'));

    Basically we are creating 2 polygons, and we are trying to use MBRContains to determine whether a point is within either of the two polygons. Surprisingly, it returns both polygons! Point (3,6) should only exist in the first inserted polygon. Note that both polygons are tilted (once you draw them on a piece of paper, you will see). How come MySQL returns both polygons? I'm using MySQL Community Edition 5.1.
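
    MBRContains, as the name suggests, compares minimum bounding rectangles rather than the exact polygon shapes, and the bounding boxes of both tilted polygons do contain (3, 6). A small self-contained Python sketch (pure Python, no MySQL involved) shows the difference between the MBR test and an exact point-in-polygon test for the two polygons above:

        def mbr_contains(poly, p):
            # Minimum bounding rectangle test -- effectively what MBRContains checks.
            xs, ys = [v[0] for v in poly], [v[1] for v in poly]
            return min(xs) <= p[0] <= max(xs) and min(ys) <= p[1] <= max(ys)

        def point_in_polygon(poly, p):
            # Exact even-odd (ray casting) test.
            inside = False
            px, py = p
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
                if (y1 > py) != (y2 > py):
                    x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                    if px < x_cross:
                        inside = not inside
            return inside

        poly1 = [(0, 5), (5, 10), (7, 8), (2, 3)]
        poly2 = [(2, 3), (7, 8), (9, 6), (4, 1)]
        point = (3, 6)

        print(mbr_contains(poly1, point), mbr_contains(poly2, point))          # True True
        print(point_in_polygon(poly1, point), point_in_polygon(poly2, point))  # True False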

    Read the article

  • Adding Column While Selecting Table in SQL

    - by kmkperumal
    My first table is ProjectCustomFields:

        CustomFieldId  ProjectId  CustomFieldName  CustomFieldRequired  CustomFieldDataType
        69             1          User Name        1                    0
        72             1          City             1                    0
        74             1          Email            0                    0
        82             1          Salary           1                    2

    My second table is ProjectCustomFieldValues:

        CustomFieldValueId  ProjectId  CustomFieldId  CustomFieldValue   RecordId
        35                  1          69             kaliya             1
        36                  1          72             Bangalore          1
        37                  1          74             [email protected]  1
        41                  1          69             Yohesh             2
        42                  1          72             Delhi              2
        43                  1          74                                2
        50                  1          69             sss                3
        51                  1          72             Delhi              3
        52                  1          74             [email protected]  3
        57                  1          69             Sunil              4
        58                  1          72             Mumbai             4
        59                  1          74             [email protected]  4
        60                  1          82             20000              4

    I tried the query below:

        Select M.CustomFieldName, N.CustomFieldValue, N.RecordId
        From (Select G.CustomFieldName, H.RecordId
              From (Select CustomFieldName From ProjectCustomFields Where ProjectId=1) G
              Cross Join (Select Distinct RecordId From ProjectCustomFieldValues) H) M
        Left Join (Select CustFiled.CustomFieldName, CustValue.CustomFieldValue, CustValue.RecordId
                   From ProjectCustomFieldValues CustValue
                   Left Join ProjectCustomFields CustFiled
                     On CustValue.CustomFieldId = CustFiled.CustomFieldId
                   Where CustValue.AuctionId = 1) N
          On M.CustomFieldName = N.CustomFieldName And M.RecordId = N.RecordId

    But I got the result below:

        CustomFieldName  CustomFieldValue   RecordId
        User Name        kaliya             1
        City             Bangalore          1
        Email            [email protected]  1
        Salary           NULL               **NULL**
        User Name        Yohesh             2
        City             Delhi              2
        Email                               2
        Salary           NULL               **NULL**
        User Name        sss                3
        City             Delhi              3
        Email            [email protected]  3
        Salary           NULL               **NULL**
        User Name        NULL               **NULL**
        City             NULL               **NULL**
        Email            NULL               **NULL**
        Salary           NULL               **NULL**
        User Name        Sunil              4
        City             Mumbai             4
        Email            [email protected]  4
        Salary           20000              4

    But the expected result is:

        CustomFieldName  CustomFieldValue   RecordId
        User Name        kaliya             1
        City             Bangalore          1
        Email            [email protected]  1
        Salary           NULL               **1**
        User Name        Yohesh             2
        City             Delhi              2
        Email                               2
        Salary           NULL               **2**
        User Name        sss                3
        City             Delhi              3
        Email            [email protected]  3
        Salary           NULL               **3**
        User Name        Sunil              4
        City             Mumbai             4
        Email            [email protected]  4
        Salary           20000              4

    Please guide me, someone. I have tried a lot, but I get a NULL value in RecordId, and I need the matching RecordId as shown in the expected result above.
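
    The shape of this query (a cross join of every field name with every record id, left-joined to the stored values) can be sketched in plain Python; this is only the pivot logic, not SQL, with placeholder emails, and it illustrates that the record id has to come from the cross-join side rather than the joined side if it is never to be NULL:

        field_names = ["User Name", "City", "Email", "Salary"]
        values = {  # (record_id, field_name) -> stored value; emails are placeholders
            (1, "User Name"): "kaliya", (1, "City"): "Bangalore", (1, "Email"): "a@x",
            (2, "User Name"): "Yohesh", (2, "City"): "Delhi",     (2, "Email"): "",
            (3, "User Name"): "sss",    (3, "City"): "Delhi",     (3, "Email"): "b@x",
            (4, "User Name"): "Sunil",  (4, "City"): "Mumbai",    (4, "Email"): "c@x",
            (4, "Salary"): "20000",
        }
        record_ids = sorted({rec for rec, _ in values})

        # Cross join field names x record ids, then "left join" against the values dict.
        for rec in record_ids:
            for field in field_names:
                value = values.get((rec, field))   # None plays the role of NULL
                print(field, value, rec)           # rec comes from the cross join,
                                                   # so it is never None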

    Read the article

  • How can I read a DBF file with incorrectly defined column data types using ADO.NET?

    - by Jason
    I have several DBF files generated by a third party that I need to be able to query. I am having trouble because all of the column types have been defined as characters, but the data within some of these fields actually contains binary data. If I try to read these fields using an OleDbDataReader as anything other than a string or character array, I get an InvalidCastException thrown, but I need to be able to read them as a binary value, or at least cast/convert them after they are read. The columns that actually DO contain text are being returned as expected. For example, the very first column is defined as a character field with a length of 2 bytes, but the field contains a 16-bit integer. I have written the following test code to read the first column and convert it to the appropriate data type, but the value is not coming out right. The first row of the database has a value of 17365 (0x43D5) in the first column. Running the following code, what I end up getting is 17215 (0x433F). I'm pretty sure it has to do with using the ASCII encoding to get the bytes from the string returned by the data reader, but I'm not sure of another way to get the value into the format that I need, other than to write my own DBF reader and bypass ADO.NET altogether, which I don't want to do unless I absolutely have to. Any help would be greatly appreciated.

        byte[] c0;
        int i0;
        string con = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\ASTM;Extended Properties=dBASE III;User ID=Admin;Password=;";
        using (OleDbConnection c = new OleDbConnection(con))
        {
            c.Open();
            OleDbCommand cmd = c.CreateCommand();
            cmd.CommandText = "SELECT * FROM astm2007";
            OleDbDataReader dr = cmd.ExecuteReader();
            while (dr.Read())
            {
                c0 = Encoding.ASCII.GetBytes(dr.GetValue(0).ToString());
                i0 = BitConverter.ToInt16(c0, 0);
            }
            dr.Dispose();
        }
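
    The arithmetic reported above is consistent with ASCII encoding replacing the non-ASCII byte 0xD5 with '?' (0x3F): 0x43D5 is 17365 and 0x433F is 17215. A quick Python sketch reproduces the same byte-level effect (Python only illustrates the encoding behaviour; using a single-byte encoding such as Latin-1 on the .NET side is a plausible workaround, not something from the post):

        import struct

        raw = b"\xd5\x43"                       # the two bytes stored in the 2-char field
        print(struct.unpack("<h", raw)[0])      # 17365 == 0x43D5 (little-endian int16)

        # What an ASCII encoder effectively does: any byte > 0x7F becomes '?' (0x3F).
        mangled = raw.decode("latin-1").encode("ascii", errors="replace")
        print(mangled, struct.unpack("<h", mangled)[0])   # b'?C' -> 17215 == 0x433F

        # A single-byte encoding such as Latin-1 round-trips all 256 byte values.
        preserved = raw.decode("latin-1").encode("latin-1")
        print(preserved == raw)                 # True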

    Read the article

  • Optimize inserts

    - by ikerib
    Hi! I wrote an importer in VB.NET which gets data from a SQL Server and inserts it over an ADSL connection into a remote MySQL server. At first there were about 200 records, but now there are more than 500,000 records, and exporting all the data takes about 11 hours, which is bad, very bad. I need to optimize my importer. It currently loads the data into a DataTable, and then a function loops row by row and inserts the data with an "insert into" query, like this:

        For Each dr As DataRow In dt.Rows
            Console.Write(".")
            Dim sql As String = "INSERT INTO clientes(id,nombrefis,nombrecom,direccion,codpos,municipio_id,telefono,fax,cif)" & _
                                "VALUES (@id,@nombrefis,@nombrecom,@direccion,@codpos,@municipio_id,@telefono,@fax,@cif)"
            cmd = New MySqlCommand(sql, cnn)
            cmd.Parameters.AddWithValue("id", Int32.Parse(dr("ID EMPRESA").ToString))
            cmd.Parameters.AddWithValue("nombrefis", dr("NOMEMP"))
            cmd.Parameters.AddWithValue("nombrecom", dr("EMPRESA"))
            cmd.Parameters.AddWithValue("direccion", dr("DIRECC"))
            cmd.Parameters.AddWithValue("codpos", dr("CODPOS"))
            cmd.Parameters.AddWithValue("municipio_id", Int32.Parse(dr("CODIGO MUNICIPIO")).ToString)
            cmd.Parameters.AddWithValue("telefono", dr("TELEF"))
            cmd.Parameters.AddWithValue("fax", dr("FAX"))
            cmd.Parameters.AddWithValue("cif", dr("CIF"))
            cmd.ExecuteNonQuery()
        Next

    Any ideas or advice? Thanks so much
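
    The usual ways to speed up a row-by-row import are to reuse one prepared statement, send many rows per round trip, and wrap batches in a transaction instead of autocommitting each insert. A rough sketch of that shape using Python's standard sqlite3 module (only to show the batching pattern; the table, data, and driver are stand-ins, not the poster's MySQL setup):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE clientes (id INTEGER, nombrefis TEXT, direccion TEXT)")

        rows = [(i, f"name-{i}", f"street-{i}") for i in range(500_000)]  # pretend DataTable

        BATCH = 5_000
        for start in range(0, len(rows), BATCH):
            with conn:                               # one transaction per batch
                conn.executemany(                    # one prepared statement, many rows
                    "INSERT INTO clientes (id, nombrefis, direccion) VALUES (?, ?, ?)",
                    rows[start:start + BATCH],
                )

        print(conn.execute("SELECT COUNT(*) FROM clientes").fetchone()[0])  # 500000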

    Read the article

  • More than 100 connections to SQL Server 2008 in "sleeping" status - Solved

    - by Allende
    I have big trouble here, well, at my server. I have an ASP.NET web app (framework 4.x) running on my server, and all the transactions/selects/updates/inserts are made with ADO.NET. My problem is that after it has been in use for a while (a couple of updates/selects/inserts), I sometimes end up with more than 100 connections in "sleeping" status when I check the connections on SQL Server with this query:

        SELECT spid, a.status, hostname, program_name, cmd, cpu, physical_io, blocked, b.name, loginame
        FROM master.dbo.sysprocesses a
        INNER JOIN master.dbo.sysdatabases b ON a.dbid = b.dbid
        where program_name like '%TMS%'
        ORDER BY spid

    I've been checking my code and closing the connection every time I open one. I'm going to test the new class, but I'm afraid the problem won't be fixed. The connection pool is supposed to keep connections around to re-use them, but from what I can see it doesn't always re-use them. Any ideas, besides checking that every connection is closed after use?

    SOLVED (now I have just one, beautiful connection in "sleeping" status): Besides the answer from David Stratton, I would like to share this link, which helps explain really well how the connection pool works: http://dinesql.blogspot.com/2010/07/sql-server-sleeping-status-and.html. To be short: you need to close every connection (SqlConnection object) so that the connection pool can re-use it, and use the same connection string everywhere; to ensure this, it is highly recommended to keep it in one place such as web.config. Be careful with DataReaders: you should close their connections too (that is what drove me out of my mind for a while).
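
    The behaviour described here, connections lingering until they are explicitly closed and handed back to the pool, can be illustrated with a toy pool. This is a conceptual Python sketch of pooling in general, not ADO.NET's actual implementation:

        class ToyPool:
            """A deliberately tiny connection pool: release() hands a connection
            back for re-use; forgetting to release forces the pool to open new ones."""

            def __init__(self):
                self.idle = []            # the "sleeping" connections, ready for re-use
                self.opened = 0

            def get(self):
                if self.idle:
                    return self.idle.pop()
                self.opened += 1
                return object()           # stands in for a real server connection

            def release(self, conn):      # what Close()/Dispose() triggers conceptually
                self.idle.append(conn)

        pool = ToyPool()
        for _ in range(100):              # well-behaved caller: get() paired with release()
            c = pool.get()
            pool.release(c)
        print(pool.opened)                # 1 -- one physical connection, re-used

        leaky = ToyPool()
        for _ in range(100):              # leaky caller: connections never released
            leaky.get()
        print(leaky.opened)               # 100 -- one per request, all left open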

    Read the article

  • Core Data Relationship problem

    - by awattar
    I have a very simple model with two objects: Name and Category. One Name can be in many Categories (it's a one-way relationship). I'm trying to create 8 Categories, each with 8 Names. Example code:

        NSMutableArray *localArray = [NSMutableArray arrayWithObjects:
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g1", @"Name", @"g1", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g2", @"Name", @"g2", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g3", @"Name", @"g3", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g4", @"Name", @"g4", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g5", @"Name", @"g5", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g6", @"Name", @"g6", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g7", @"Name", @"g7", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g8", @"Name", @"g8", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            nil];

        NSMutableArray *localArray2 = [NSMutableArray arrayWithObjects:
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test1", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test2", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test3", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test4", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test5", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test6", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test7", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test8", @"Name", nil],
            nil];

        NSError *error;
        NSManagedObjectContext *moc = [(AppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext];

        for (NSMutableDictionary *item in localArray) {
            NSManagedObject *category = [NSEntityDescription insertNewObjectForEntityForName:@"Category" inManagedObjectContext:managedObjectContext];
            [category setValue:[item objectForKey:@"Name"] forKey:@"Name"];
            [category setValue:[item objectForKey:@"Icon"] forKey:@"Icon"];
            [category setValue:[item objectForKey:@"Male"] forKey:@"Male"];
            for (NSMutableDictionary *item2 in localArray2) {
                NSManagedObject *name = [NSEntityDescription insertNewObjectForEntityForName:@"Name" inManagedObjectContext:managedObjectContext];
                [name setValue:[item2 objectForKey:@"Name"] forKey:@"Name"];
                [[name mutableSetValueForKey:@"CategoryRelationship"] addObject:category];
            }
        }
        [moc save:&error];

    And here's the problem: I've checked that 8 Categories are saved and 64 Names are saved, but only 8 of all the Names are connected to any category. So when I query for Names in Categories with [NSPredicate predicateWithFormat:@"CategoryRelationship.@count != 0"] there are 8 elements, and with [NSPredicate predicateWithFormat:@"CategoryRelationship.@count = 0"] there are 56 elements. What is going on here?

    Read the article

  • How to add an image to an SSRS report with a dynamic url?

    - by jrummell
    I'm trying to add an image to a report. The image src url is an IHttpHandler that takes a few query string parameters. Here's an example:

        <img src="Image.ashx?item=1234567890&lot=asdf&width=50" alt=""/>

    I added an Image to a cell and then set Source to External and Value to the following expression:

        ="Image.ashx?item="+Fields!ItemID.Value+"&lot="+Fields!LotID.Value+"&width=50"

    But when I view the report, it renders the image html as:

        <IMG SRC="" />

    What am I missing?

    Update: Even if I set Value to "image.jpg" it still renders an empty src attribute. I'm not sure if it makes a difference, but I'm using this with a VS 2008 ReportViewer control in Remote processing mode.

    Update: I was able to get the images to display in the Report Designer (VS 2005) with an absolute path (http://server/path/to/http/handler). But they didn't display on the Report Manager website. I even set up an Unattended Execution Account that has access to the external URLs.

    Read the article

  • Corrupted MySQL table causes crash in mysql.h (C++)

    - by Francesco
    I've created a very simple MySQL class in C++, but when MySQL crashes, table indexes become corrupted, and all my C++ programs crash too, because they seem unable to recognize the corrupted table and let me handle the issue.

        Q_RES = mysql_real_query(MY_mysql, tmp_query.c_str(), (unsigned int) tmp_query.size());
        if (Q_RES != 0) {
            if (Q_RES == CR_COMMANDS_OUT_OF_SYNC) cout << "errorquery : CR_COMMANDS_OUT_OF_SYNC " << endl;
            if (Q_RES == CR_SERVER_GONE_ERROR) cout << "errorquery : CR_SERVER_GONE_ERROR " << endl;
            if (Q_RES == CR_SERVER_LOST) cout << "errorquery : CR_SERVER_LOST " << endl;
            LAST_ERROR = mysql_error(MY_mysql);
            if (n_retrycount < n_retry_limit) {
                // RETRY!
                n_retrycount++;
                sleep(1);
                cout << "SLEEP - query retry! " << endl;
                ping();
                return select_sql(tmp_query);
            }
            return false;
        }
        MY_result = mysql_store_result(MY_mysql);
        B_stored_results = true;
        cout << "b8" << endl;
        LAST_affected_rows = (mysql_num_rows(MY_result) + 1); // could return -1
        cout << "b8-1" << endl;

    The program terminates with a "segmentation fault" after printing "b8" and before "b8-1". Q_RES reports no issue even if the table is corrupted. I would like to know if there is a way to recognize that the table has problems, so that I can then run a MySQL repair or check. Thanks, Francesco

    Read the article

  • Converting a Linq expression tree that relies on SqlMethods.Like() for use with the Entity Framework

    - by JohnnyO
    I recently switched from using Linq to Sql to the Entity Framework. One of the things that I've been really struggling with is getting a general purpose IQueryable extension method that was built for Linq to Sql to work with the Entity Framework. This extension method has a dependency on the Like() method of SqlMethods, which is Linq to Sql specific. What I really like about this extension method is that it allows me to dynamically construct a Sql Like statement on any object at runtime, by simply passing in a property name (as string) and a query clause (also as string). Such an extension method is very convenient for using grids like flexigrid or jqgrid. Here is the Linq to Sql version (taken from this tutorial: http://www.codeproject.com/KB/aspnet/MVCFlexigrid.aspx):

        public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword)
        {
            var type = typeof(T);
            var property = type.GetProperty(propertyName);
            var parameter = Expression.Parameter(type, "p");
            var propertyAccess = Expression.MakeMemberAccess(parameter, property);
            var constant = Expression.Constant("%" + keyword + "%");
            var like = typeof(SqlMethods).GetMethod("Like", new Type[] { typeof(string), typeof(string) });
            MethodCallExpression methodExp = Expression.Call(null, like, propertyAccess, constant);
            Expression<Func<T, bool>> lambda = Expression.Lambda<Func<T, bool>>(methodExp, parameter);
            return source.Where(lambda);
        }

    With this extension method, I can simply do the following:

        someList.Like("FirstName", "mike");
        anotherList.Like("ProductName", "widget");

    Is there an equivalent way to do this with Entity Framework? Thanks in advance.

    Read the article

  • twitter bootstrap typeahead (method 'toLowerCase' of undefined)

    - by mmoscosa
    I am trying to use Twitter Bootstrap typeahead to get the manufacturers from my DB. Because the stock typeahead does not support ajax calls I am using this fork: https://gist.github.com/1866577. On that page there is a comment that mentions how to do exactly what I want to do. The problem is that when I run my code I keep on getting:

        Uncaught TypeError: Cannot call method 'toLowerCase' of undefined

    I googled around and tried changing my jQuery file, using both the minified and non-minified versions as well as the one hosted on Google Code, and I kept getting the same error. My code currently is as follows:

        $('#manufacturer').typeahead({
            source: function(typeahead, query){
                $.ajax({
                    url: window.location.origin+"/bows/get_manufacturers.json",
                    type: "POST",
                    data: "",
                    dataType: "JSON",
                    async: false,
                    success: function(results){
                        var manufacturers = new Array;
                        $.map(results.data.manufacturers, function(data, item){
                            var group;
                            group = {
                                manufacturer_id: data.Manufacturer.id,
                                manufacturer: data.Manufacturer.manufacturer
                            };
                            manufacturers.push(group);
                        });
                        typeahead.process(manufacturers);
                    }
                });
            },
            property: 'name',
            items: 11,
            onselect: function (obj) {
            }
        });

    On the url field I added window.location.origin to avoid any problems, as already discussed in another question. Also, I was using $.each() before and then decided to use $.map() as recommended by Tomislav Markovski in a similar question. Anyone have any idea why I keep getting this problem? Thank you

    Read the article

  • JS encodeURIComponent result different from the one created by FORM

    - by Marco Demaio
    I thought values entered in forms were properly encoded by browsers, but this simple test shows it's not true:

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html><head>
        <meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
        <title></title>
        </head><body>
        <form id="test" action="test_get_vs_encodeuri.html" method="GET" onsubmit="alert(encodeURIComponent(this.one.value));">
            <input name="one" type="text" value="Euro-€">
            <input type="submit" value="SUBMIT">
        </form>
        </body></html>

    When hitting the submit button, encodeURIComponent encodes the input value as "Euro-%E2%82%AC", while the browser writes only a simple "Euro-%80" into the GET query string. Could someone explain? Or is encodeURIComponent doing unnecessary conversions?
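
    The two results differ because encodeURIComponent always percent-encodes the UTF-8 bytes of the character, while a form submitted from a windows-1252 page percent-encodes the windows-1252 bytes; '€' is three bytes in UTF-8 but the single byte 0x80 in windows-1252. A quick Python check reproduces the byte-level arithmetic (it does not reproduce the browser itself):

        from urllib.parse import quote

        value = "Euro-€"

        # What encodeURIComponent does: percent-encode the UTF-8 bytes.
        print(quote(value, encoding="utf-8"))      # Euro-%E2%82%AC

        # What a form on a charset=windows-1252 page submits: cp1252 bytes instead.
        print(quote(value, encoding="cp1252"))     # Euro-%80

        print("€".encode("utf-8"), "€".encode("cp1252"))   # b'\xe2\x82\xac' b'\x80'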

    Read the article

  • Does PHP mysql_fetch_array work with an HTML input box?

    - by dexter
    This is my entire PHP code:

        <?php
        if (empty($_POST['selid'])) {
            echo "no value selected";
        } else {
            $con = mysql_connect("localhost", "root", "");
            if (mysql_select_db("cdcol", $con)) {
                $sql = "SELECT * FROM products where Id = '$_POST[selid]'";
                if ($result = mysql_query($sql)) {
                    echo "<form name=\"updaterow\" method=\"post\" action=\"dbtest.php\">";
                    while ($row = mysql_fetch_array($result)) {
                        echo "Id :<input type=\"text\" name=\"ppId\" value=".$row['Id']." READONLY></input></br>";
                        echo "Name :<input type=\"text\" name=\"pName\" value=".$row['Name']."></input></br>";
                        echo "Description :<input type=\"text\" name=\"pDesc\" value=".$row['Description']."></input></br>";
                        echo "Unit Price :<input type=\"text\" name=\"pUP\" value=".$row['UnitPrice']."></input></br>";
                        echo "<input type=\"hidden\" name=\"mode\" value=\"Update\"/>";
                    }
                    echo "<input type=\"submit\" value=\"Update\">";
                    echo "</form>";
                } else {
                    echo "Query ERROR";
                }
            }
        }
        ?>

    The PROBLEM here is: if the value I am getting from the database using mysql_fetch_array($result) is, say, a Description of "my product", then the input box shows only "my"; the word (or digit) after the SPACE (i.e. blank space) doesn't get displayed. Can an input box like the above display data with two or more words (separated by blank spaces)?
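
    The symptom matches what an unquoted HTML attribute produces: the value ends at the first whitespace, so value=my product parses as value="my" plus a stray attribute named product. Python's standard-library HTML parser shows the effect (this is only a sketch of the parsing behaviour; quoting and escaping the value on the PHP side is the general remedy, stated here as an assumption rather than a fix verified against this code):

        from html.parser import HTMLParser

        class AttrDump(HTMLParser):
            def handle_starttag(self, tag, attrs):
                print(tag, attrs)

        # Unquoted attribute: the value stops at the first space.
        AttrDump().feed('<input type="text" name="pDesc" value=my product>')
        #   input [('type', 'text'), ('name', 'pDesc'), ('value', 'my'), ('product', None)]

        # Quoted attribute: the whole string survives.
        AttrDump().feed('<input type="text" name="pDesc" value="my product">')
        #   input [('type', 'text'), ('name', 'pDesc'), ('value', 'my product')]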

    Read the article

  • Building a many-to-many db schema using only an unpredictable number of foreign keys

    - by user1449855
    Good afternoon (at least around here), I have a many-to-many relationship schema that I'm having trouble building. The main problem is that I'm only working with primary and foreign keys (no varchars or enums, to simplify things) and the number of many-to-many relationships is not predictable and can increase at any time. I looked around at various questions and couldn't find something that directly addressed this issue. I split the problem in half, so I now have two one-to-many schemas. One is solved but the other is giving me fits. Let's assume table FOO is a standard, boring table that has a simple primary key; it's the "one" in the one-to-many relationship. Table BAR can relate to multiple keys of FOO, and the number of related keys is not known beforehand. An example: a query on FOO returns ids 3, 4, 5, and BAR needs a unique key that relates to 3, 4, 5 (though there could be any number of ids returned). The usual join table does not work:

        Table FOO_BAR
        primary_key | foo_id | bar_id |

    since FOO returns 3 unique keys and here bar_id has a one-to-one relationship with foo_id. Having two join tables does not seem to work either, as it still can't map foo_ids 3, 4, 5 to a single bar_id:

        Table FOO_TO_BAR
        primary_key | foo_id | bar_to_foo_id |

        Table BAR_TO_FOO
        primary_key | foo_to_bar_id | bar_id |

    What am I doing wrong? Am I making things more complicated than they are? How should I approach the problem? Thanks a lot for the help.
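
    For what it's worth, a single junction table does allow one bar_id to appear on several rows, one per related foo_id, which is the usual way a many-to-many is stored; here is a small sqlite3 sketch of that pattern (table and column names follow the question, everything else is illustrative):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE foo (id INTEGER PRIMARY KEY);
            CREATE TABLE bar (id INTEGER PRIMARY KEY);
            CREATE TABLE foo_bar (
                foo_id INTEGER REFERENCES foo(id),
                bar_id INTEGER REFERENCES bar(id),
                PRIMARY KEY (foo_id, bar_id)      -- one row per pairing
            );
        """)
        db.executemany("INSERT INTO foo (id) VALUES (?)", [(3,), (4,), (5,)])
        db.execute("INSERT INTO bar (id) VALUES (1)")
        # The same bar_id relates to as many foo_ids as needed.
        db.executemany("INSERT INTO foo_bar (foo_id, bar_id) VALUES (?, 1)", [(3,), (4,), (5,)])

        print(db.execute("SELECT foo_id FROM foo_bar WHERE bar_id = 1").fetchall())
        # [(3,), (4,), (5,)]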

    Read the article

  • What's the best way to develop a debugging window for an ajax ASP.Net MVC application

    - by KallDrexx
    While developing my ASP.NET MVC app, I have started to see the need for a debugging console window to assist in figuring out what is going right and wrong in my code. I read the last few chapters of the Pro ASP.NET MVC book, and the author details how to use http modules to show page load/creation times and LINQ to SQL query logs, both of which I definitely want to be able to see. However, since I am loading a lot of small sections of my page individually with ajax, I don't want the debug information right there in the middle of my screen. So the idea I came up with was to have a separate browser window (open-able by a link or some javascript) with a console log that can contain entries logged both from javascript and from the ASP.NET MVC run. The former should be relatively easy, but I'm having trouble coming up with a way to log the ASP.NET information in ajax requests. The direction I have been thinking of going is to create an httpmodule (like the Pro MVC book does) and have that module append the log messages to the response as javascript console calls. The issue I see with this is finding a way to get the log messages from the controller's action methods to the httpmodule's methods. The only way I see to do this is with a singleton, but I'm not sure if singletons are bad practice for a stateless web application. Furthermore, it seems like if I return json with my ajax calls (instead of pure html) then that won't work at all anyway, unless there is a way to add data to an existing json structure inside the httpmodule. How does everyone else handle this type of debugging in heavily ajax applications? For reference, the javascript library I am using is jquery.

    Read the article

  • Rails 3 : create two dimensional hash and add values from a loop

    - by John
    I have two models:

        class Project < ActiveRecord::Base
          has_many :ticket
          attr_accessible ....
        end

        class Ticket < ActiveRecord::Base
          belongs_to :project
          attr_accessible done_date, description, ....
        end

    In my ProjectsController I would like to build a two-dimensional hash so that, for one project, a single variable holds all tickets that are done (with done_date as key and descriptions as values). For example, I would like a hash like this:

        # What I'm looking for:
        @tickets_of_project = ["done_date_1" => ["a", "b", "c"], "done_date_2" => ["d", "e"]]

    And what I'm currently trying (in ProjectsController):

        def show
          # Get current project
          @project = Project.find(params[:id])
          # Get all done tickets for a project, ordered by done_date
          @tickets = Ticket.where(:project_id => params[:id]).where("done_date IS NOT NULL").order(:done_date)
          # Create a new hash
          @tickets_of_project = Hash.new {}
          # Loop over all tickets and complete the hash
          @tickets.each do |ticket|
            # TO DO
            # HOW TO PUT ticket.value IN "tickets_of_project" WITH KEY = ticket.done_date ??
          end
        end

    I don't know if I'm on the right track (maybe use .map instead of a where query), but how can I fill the hash, checking whether the key already exists or not? Thanx :)
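
    The grouping itself, one key per done_date with a list of descriptions under it, is the classic "dict of lists" pattern. Here it is sketched in Python with defaultdict, since the question is really about that pattern rather than anything Rails-specific (the ticket data below is made up):

        from collections import defaultdict

        tickets = [  # stand-ins for the done tickets, already ordered by done_date
            {"done_date": "2012-05-01", "description": "a"},
            {"done_date": "2012-05-01", "description": "b"},
            {"done_date": "2012-05-01", "description": "c"},
            {"done_date": "2012-05-02", "description": "d"},
            {"done_date": "2012-05-02", "description": "e"},
        ]

        tickets_by_date = defaultdict(list)       # missing keys start as empty lists
        for ticket in tickets:
            tickets_by_date[ticket["done_date"]].append(ticket["description"])

        print(dict(tickets_by_date))
        # {'2012-05-01': ['a', 'b', 'c'], '2012-05-02': ['d', 'e']}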

    Read the article

  • What is the correct way to implement a massive hierarchical, geographical search for news?

    - by Philip Brocoum
    The company I work for is in the business of sending press releases. We want to make it possible for interested parties to search for press releases based on a number of criteria, the most important being location. For example, someone might search for all news sent to New York City, Massachusetts, or ZIP code 89134, sent from a governmental institution, under the topic of "traffic". Or whatever. The problem is, we've sent, literally, hundreds of thousands of press releases. Searching is slow and complex. For example, a press release sent to Queens, NY should show up in the search I mentioned above even though it wasn't specifically sent to New York City, because Queens is a subset of New York City. We may also want to implement "and" and "or" and negation and text search to the query to create complex searches. These searches also have to be fast enough to function as dynamic RSS feeds. I really don't know anything about search theory, or how it's properly done. The way we are getting by right now is using a data mart to store the locations the releases were sent to in a single table. However, because of the subset thing mentioned above, the data mart is gigantic with millions of rows. And we haven't even implemented cities yet, and there are about 50,000 cities in the United States, which will exponentially increase the size of the data mart by so much I'm afraid it just won't work anymore. Anyway, I realize this is not a simple question and there won't be a "do this" answer. However, I'm hoping one of you can point me in the right direction where I can learn about how massive searches are done? Because I really know nothing about it. And such a search engine is turning out to be incredibly difficult to make. Thanks! I know there must be a way because if Google can search the entire internet we must be able to search our own database :-)
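
    One common way to keep such a location table from exploding, offered here purely as an assumption and not something from the post, is to store each release once against its most specific place and expand locations through a parent hierarchy at index or query time. A toy sketch (the place hierarchy and release ids are invented):

        # Hypothetical place hierarchy: child -> parent.
        parent = {
            "Queens": "New York City",
            "New York City": "New York State",
            "ZIP 89134": "Las Vegas",
            "Las Vegas": "Nevada",
        }

        def ancestors(place):
            chain = [place]
            while place in parent:
                place = parent[place]
                chain.append(place)
            return chain

        # Index each release once, under every ancestor of its target place.
        releases = [("release-1", "Queens"), ("release-2", "Las Vegas")]
        index = {}
        for release_id, place in releases:
            for loc in ancestors(place):
                index.setdefault(loc, set()).add(release_id)

        print(index.get("New York City"))   # {'release-1'} -- Queens rolls up to NYC
        print(index.get("Nevada"))          # {'release-2'}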

    Read the article

  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key.

        CREATE TABLE [dbo].[Customers](
            [CustomerId] [int] IDENTITY(1,1) NOT NULL,
            [CustomerName] [varchar](100) NOT NULL,
            [Deleted] [bit] NOT NULL,
            [Active] [bit] NOT NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED
            (
                [CustomerId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    This is the query I'll be using to see what the execution plan shows:

        SELECT CustomerName FROM Customers

    Executing this command with no additional non-clustered index, the execution plan shows me:

        I/O cost = 3.45646
        Operator cost = 4.57715

    Now I'm trying to see if it's possible to improve performance, so I've created a non-clustered index for this table.

    1) First non-clustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    Executing the select against the Customers table again, the execution plan shows me:

        I/O cost = 2.79942
        Operator cost = 3.92001

    It seems better. Now I've dropped the index I just created, in order to create a new one.

    2) Second non-clustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC
        )
        INCLUDE ( [CustomerName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this new non-clustered index, I've executed the select statement again and the execution plan shows me the same result:

        I/O cost = 2.79942
        Operator cost = 3.92001

    So, which non-clustered index should I use? Why are the I/O and Operator costs the same in the execution plan? Am I doing something wrong, or is this expected? Thank you

    Read the article

  • Script working with mysql and php into a textarea and back

    - by Tribalcomm
    I am trying to write a custom script that will keep a list of strings in a textarea. Each line of the textarea will be a row from a table. The problem I have is how to make the script allow adding, updating, or deleting rows based on a submit. So, for instance, I currently have 3 rows in the database:

        john
        sue
        mark

    I want to be able to delete sue and add richard, and have it delete the row with sue and insert a row for richard. My code so far is as follows. To query the db and list it in the textarea:

        $basearray = mysql_query("SELECT name FROM mytable ORDER BY name");
        <textarea name="names" cols=6 rows=12>
        <?php
        foreach($basearray as $base){
            echo $base->name."\n";
        }
        ?>
        </textarea>

    After the submit, I have:

        <?php
        $namelist = $_REQUEST[names];
        $newarray = explode("\n", $namelist);
        foreach($newarray as $name) {
            if (!in_array($name, $basearray)) {
                mysql_query("DELETE FROM mytable WHERE word='$name'");
            } elseif (in_array($name, $basearray)) {
                ;
            } else {
                mysql_query("INSERT INTO mytable (name) VALUES ('$name')");
            }
        }
        ?>

    Please tell me what I am doing wrong. I can't get anything to work when I edit the contents of the textarea. Thanks!
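
    The core of the sync step is a set comparison between what is already in the table and what came back from the textarea: names only in the database get deleted, names only in the textarea get inserted. That bookkeeping, sketched with Python sets (just the diff logic, not the PHP/MySQL calls):

        in_database = {"john", "sue", "mark"}
        from_textarea = {line.strip() for line in "john\nmark\nrichard".splitlines() if line.strip()}

        to_delete = in_database - from_textarea    # {'sue'}
        to_insert = from_textarea - in_database    # {'richard'}
        unchanged = in_database & from_textarea    # {'john', 'mark'}

        print(sorted(to_delete), sorted(to_insert), sorted(unchanged))
        # ['sue'] ['richard'] ['john', 'mark']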

    Read the article

  • Django LFS - custom views

    - by owca
    For all you Lightning Fast Shop users: I'm trying to implement my own front-page view that will list all products from the shop (under the '/' address). So I have a template:

        {% extends "lfs/shop/shop_base.html" %}
        {% block content %}
        <div id="najnowsze_produkty">
            <ul>
            {% for obj in objects %}
                <li> {{ obj.name }} </li>
            {% endfor %}
            </ul>
        </div>
        {% endblock %}

    and then I've edited the main shop view:

        from lfs.catalog.models import Category
        from lfs.catalog.models import Product

        def shop_view(request, template_name="lfs/shop/shop.html"):
            products = Product.objects.all()
            shop = lfs_get_object_or_404(Shop, pk=1)
            return render_to_response(template_name, RequestContext(request, {
                "shop" : shop,
                "products" : products
            }))

    but it just shows nothing. When I do a Product.objects.all() query in the shell I get results. Any ideas what could cause the problem? Maybe I should filter products with 'active' status only? But I'm not sure if that can influence all objects in any way.

    Read the article

  • Rails: fighting long http response times with ajax. Is it a good idea? Please, help with implementation

    - by baranov
    Hi, everybody! I've googled some tutorials, browsed some SO answers, and was unable to find a recipe for my problem. I'm writing a web site which is supposed to display an almost-realtime stock chart. Data is stored in a constantly updating MySQL database, and I wrote a find_by_sql query which fetches all the data I need to get my chart drawn. Everything is OK, except performance: it takes from one second to one minute for different queries to fetch all the data from the database, and this time includes the necessary (My)SQL-server-side calculations. That is simply unacceptable. I got the following idea: if the data is queried from the MySQL server one point at a time instead of as the entire dataset, it takes only about 1-100 ms to get an individual point. I imagine the data fetch process might be browser-driven. After the user presses the button to get a chart drawn, the controller makes one request to the database and renders, say, a progress bar at 1% ready. When the browser gets the response, it immediately makes an (ajax) request, the server fetches the next piece of data and renders "2%", and so on, until all the data is ready and the server displays the requested chart. Could this be implemented in Rails+JS, and is there a tutorial for solving a similar problem on the web? I suppose if the thing is feasible at all, somebody should have already done it before. I have read several articles about ajax and I believe I understand the general principles, but I have never done nontrivial ajax programming myself. Thanks for your time!
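
    The server side of that polling idea boils down to an endpoint that, given an offset, returns the next slice of points plus a progress figure. A framework-neutral Python sketch of that contract (the batch size and data are invented, and this says nothing about how Rails or the browser would render it):

        POINTS = [(i, 100 + i * 0.1) for i in range(10_000)]   # pretend query results
        BATCH = 500

        def next_chunk(offset):
            """Return one slice of the dataset plus how far along the client is."""
            chunk = POINTS[offset:offset + BATCH]
            done = min(offset + len(chunk), len(POINTS))
            return {
                "points": chunk,
                "progress": round(100 * done / len(POINTS)),   # e.g. 5, 10, ..., 100
                "next_offset": done if done < len(POINTS) else None,
            }

        # The browser would keep calling with the returned next_offset until it is None.
        offset, calls = 0, 0
        while offset is not None:
            reply = next_chunk(offset)
            offset, calls = reply["next_offset"], calls + 1
        print(calls, "requests to stream the whole series")    # 20 requests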

    Read the article

  • Telephone Number to Geolocation UK

    - by David Toy
    Is there a service that provides latitude and longitude for UK phone numbers? For example: Query: 0141 574 xxx, Returns: (55.8659829, -4.2602205) [Glasgow City Centre]. Allow me to stress that I am not looking for reverse directory enquiries. I am more interested in the 'local area' for things like weather by phone or "Where's my nearest pizza shop?" If this service doesn't exist, your suggestions on how to implement it or where to get data from would also be incredibly useful. I am aware that Ofcom provides a list of area codes with a place name [1] suitable for geolocation, but I have my concerns about resolution. I see this as a particular problem in smaller towns and rural areas where an area code will cover a large geographical area.

    Second example: Area Code: 01555, Ofcom: Lanark. However:

        01555 860xxx is Crossford (4 miles W of Lanark)
        01555 77xxxx is Carluke (5 miles NW)
        01555 89xxxx is Lesmahagow (5 miles SW)
        01555 840xxx is Carnwath (7 miles NE)

    Therefore 01555 covers roughly 80 square miles. That's not particularly local.

    [1] Ofcom Area Code Tool: http://www.ofcom.org.uk/consumer/2009/09/telephone-area-codes-tool/
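
    If such data were assembled per exchange prefix rather than per area code, the lookup itself would be a longest-prefix match on the dialled digits. A small sketch (the Glasgow coordinates are the ones quoted above; the other coordinates and the prefix table itself are placeholders, not real data):

        # prefix (digits only) -> (lat, lon); longer prefixes give finer resolution.
        PREFIX_TABLE = {
            "0141574":  (55.8659829, -4.2602205),  # Glasgow City Centre (from the example)
            "01555860": (55.72, -3.83),            # Crossford  -- placeholder coordinates
            "0155577":  (55.73, -3.85),            # Carluke    -- placeholder coordinates
            "01555":    (55.67, -3.78),            # Lanark     -- placeholder coordinates
        }

        def locate(number):
            digits = "".join(ch for ch in number if ch.isdigit())
            # Try the longest matching prefix first, falling back to shorter ones.
            for length in range(len(digits), 0, -1):
                hit = PREFIX_TABLE.get(digits[:length])
                if hit:
                    return hit
            return None

        print(locate("0141 574 123"))   # (55.8659829, -4.2602205)
        print(locate("01555 860 999"))  # matches the Crossford prefix, not plain 01555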

    Read the article
