Search Results

Search found 17593 results on 704 pages for 'wmi query'.


  • How can I read a DBF file with incorrectly defined column data types using ADO.NET?

    - by Jason
    I have several DBF files generated by a third party that I need to be able to query. I am having trouble because all of the column types have been defined as characters, but the data within some of these fields actually contains binary data. If I try to read these fields using an OleDbDataReader as anything other than a string or character array, I get an InvalidCastException, but I need to be able to read them as binary values, or at least cast/convert them after they are read. The columns that actually DO contain text are being returned as expected.

    For example, the very first column is defined as a character field with a length of 2 bytes, but the field contains a 16-bit integer. I have written the following test code to read the first column and convert it to the appropriate data type, but the value is not coming out right. The first row of the database has a value of 17365 (0x43D5) in the first column. Running the following code, what I end up getting is 17215 (0x433F).

    I'm pretty sure it has to do with using the ASCII encoding to get the bytes from the string returned by the data reader, but I'm not sure of another way to get the value into the format that I need, other than to write my own DBF reader and bypass ADO.NET altogether, which I don't want to do unless I absolutely have to. Any help would be greatly appreciated.

        byte[] c0;
        int i0;
        string con = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\ASTM;Extended Properties=dBASE III;User ID=Admin;Password=;";
        using (OleDbConnection c = new OleDbConnection(con))
        {
            c.Open();
            OleDbCommand cmd = c.CreateCommand();
            cmd.CommandText = "SELECT * FROM astm2007";
            OleDbDataReader dr = cmd.ExecuteReader();
            while (dr.Read())
            {
                c0 = Encoding.ASCII.GetBytes(dr.GetValue(0).ToString());
                i0 = BitConverter.ToInt16(c0, 0);
            }
            dr.Dispose();
        }
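
    The hunch about ASCII is almost certainly right: Encoding.ASCII.GetBytes replaces every character above 0x7F with '?' (0x3F), which is exactly how 0x43D5 becomes 0x433F here (the 0xD5 byte is lost). A minimal sketch of a workaround, assuming the OLE DB provider decoded the raw field bytes with a Latin-1-compatible codepage:

        // Latin-1 (codepage 28591) maps the bytes 0x00-0xFF one-to-one onto
        // U+0000-U+00FF, so GetBytes can recover the original field bytes.
        // If the provider used a different codepage, substitute it here.
        Encoding latin1 = Encoding.GetEncoding(28591);
        byte[] c0 = latin1.GetBytes(dr.GetValue(0).ToString());
        short i0 = BitConverter.ToInt16(c0, 0); // 0x43D5 == 17365 again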


  • More than 100 connections to SQL Server 2008 in "sleeping" status - Solved

    - by Allende
    I have big trouble at my server. I have an ASP.NET web application (framework 4.x) running on my server, and all of the transactions/selects/updates/inserts are made with ADO.NET. My problem is that after the application has been in use for a while (a couple of updates/selects/inserts), I sometimes end up with more than 100 connections in "sleeping" status when I check the connections on SQL Server with this query:

        SELECT spid, a.status, hostname, program_name, cmd, cpu, physical_io, blocked, b.name, loginame
        FROM master.dbo.sysprocesses a
        INNER JOIN master.dbo.sysdatabases b ON a.dbid = b.dbid
        WHERE program_name LIKE '%TMS%'
        ORDER BY spid

    I've been checking my code, and I close the connection every time I open one. I'm going to test the new class, but I'm afraid the problem won't be fixed. Connection pooling is supposed to keep connections around so they can be re-used, but from what I can see they are not always re-used. Any ideas, besides checking that all open connections are closed after use?

    SOLVED (now I have just one beautiful connection in "sleeping" status): Besides the answer from David Stratton, I would like to share this link, which explains really well how the connection pool works: http://dinesql.blogspot.com/2010/07/sql-server-sleeping-status-and.html

    To keep it short: you need to close every connection (SqlConnection object) so that the connection pool can re-use it, and you must use the same connection string everywhere; to ensure this, it is highly recommended to keep a single connection string in the web.config. Be careful with DataReaders: you should close their connections too (that was what drove me out of my mind for a while).
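
    For what it's worth, a minimal sketch of the disposal pattern that keeps the pool healthy (names are illustrative):

        // using blocks guarantee Dispose()/Close() runs even when an
        // exception is thrown, which returns the connection to the pool.
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT ...", conn))
        {
            conn.Open();
            // CloseConnection ties the connection's lifetime to the reader
            using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
            {
                while (reader.Read()) { /* consume rows */ }
            }
        }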


  • Core Data Relationship problem

    - by awattar
    I have a very simple model with two objects: Name and Category. One Name can be in many Categories (it's a one-way relationship). I'm trying to create 8 Categories, each with 8 Names. Example code:

        NSMutableArray *localArray = [NSMutableArray arrayWithObjects:
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g1", @"Name", @"g1", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g2", @"Name", @"g2", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g3", @"Name", @"g3", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g4", @"Name", @"g4", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g5", @"Name", @"g5", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g6", @"Name", @"g6", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g7", @"Name", @"g7", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"g8", @"Name", @"g8", @"Icon", [NSNumber numberWithBool:YES], @"Male", nil],
            nil];

        NSMutableArray *localArray2 = [NSMutableArray arrayWithObjects:
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test1", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test2", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test3", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test4", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test5", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test6", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test7", @"Name", nil],
            [NSMutableDictionary dictionaryWithObjectsAndKeys: @"Test8", @"Name", nil],
            nil];

        NSError *error;
        NSManagedObjectContext *moc = [(AppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext];

        for (NSMutableDictionary *item in localArray) {
            NSManagedObject *category = [NSEntityDescription insertNewObjectForEntityForName:@"Category" inManagedObjectContext:moc];
            [category setValue:[item objectForKey:@"Name"] forKey:@"Name"];
            [category setValue:[item objectForKey:@"Icon"] forKey:@"Icon"];
            [category setValue:[item objectForKey:@"Male"] forKey:@"Male"];

            for (NSMutableDictionary *item2 in localArray2) {
                NSManagedObject *name = [NSEntityDescription insertNewObjectForEntityForName:@"Name" inManagedObjectContext:moc];
                [name setValue:[item2 objectForKey:@"Name"] forKey:@"Name"];
                [[name mutableSetValueForKey:@"CategoryRelationship"] addObject:category];
            }
        }
        [moc save:&error];

    And here's the problem: I've checked that 8 Categories are saved and 64 Names are saved, but only 8 of all the Names are connected to any Category. So when I query for Names in Categories with [NSPredicate predicateWithFormat:@"CategoryRelationship.@count != 0"] there are 8 elements, and with [NSPredicate predicateWithFormat:@"CategoryRelationship.@count = 0"] there are 56 elements. What is going on here?


  • How to add an image to an SSRS report with a dynamic url?

    - by jrummell
    I'm trying to add an image to a report. The image src URL is an IHttpHandler that takes a few query string parameters. Here's an example:

        <img src="Image.ashx?item=1234567890&lot=asdf&width=50" alt=""/>

    I added an Image to a cell, then set Source to External and Value to the following expression:

        ="Image.ashx?item="+Fields!ItemID.Value+"&lot="+Fields!LotID.Value+"&width=50"

    But when I view the report, it renders the image HTML as:

        <IMG SRC="" />

    What am I missing?

    Update: Even if I set Value to "image.jpg", it still renders an empty src attribute. I'm not sure if it makes a difference, but I'm using this with a VS 2008 ReportViewer control in Remote processing mode.

    Update: I was able to get the images to display in the Report Designer (VS 2005) with an absolute path (http://server/path/to/http/handler), but they didn't display on the Report Manager website, even after I set up an Unattended Execution Account that has access to the external URLs.
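
    A hedged note: external images generally need an absolute URL, because the report server, not the browser, resolves them, so a relative path like Image.ashx has nothing to resolve against (which fits the designer-with-absolute-path observation above). In report expressions the idiomatic string concatenation operator is also & rather than +, so a sketch of the expression would look like this (host and path are placeholders for wherever the handler is actually deployed):

        ="http://yourserver/path/Image.ashx?item=" & Fields!ItemID.Value
            & "&lot=" & Fields!LotID.Value & "&width=50"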


  • Corrupted MySQL table causes crash in mysql.h (C++)

    - by Francesco
    I've created a very simple MySQL class in C++, but when MySQL crashes and table indexes become corrupted, all my C++ programs crash too, because they seem unable to recognize the corrupted table and let me handle the issue:

        Q_RES = mysql_real_query(MY_mysql, tmp_query.c_str(), (unsigned int) tmp_query.size());
        if (Q_RES != 0)
        {
            if (Q_RES == CR_COMMANDS_OUT_OF_SYNC)
                cout << "errorquery : CR_COMMANDS_OUT_OF_SYNC " << endl;
            if (Q_RES == CR_SERVER_GONE_ERROR)
                cout << "errorquery : CR_SERVER_GONE_ERROR " << endl;
            if (Q_RES == CR_SERVER_LOST)
                cout << "errorquery : CR_SERVER_LOST " << endl;

            LAST_ERROR = mysql_error(MY_mysql);
            if (n_retrycount < n_retry_limit)
            {
                // RETRY!
                n_retrycount++;
                sleep(1);
                cout << "SLEEP - query retry! " << endl;
                ping();
                return select_sql(tmp_query);
            }
            return false;
        }

        MY_result = mysql_store_result(MY_mysql);
        B_stored_results = true;
        cout << "b8" << endl;
        LAST_affected_rows = (mysql_num_rows(MY_result) + 1); // could return -1
        cout << "b8-1" << endl;

    The program terminates with a segmentation fault after printing "b8" and before "b8-1", and Q_RES shows no error even when the table is corrupted. I would like to know if there is a way to recognize that the table has problems, so that I can then run a MySQL repair or check. Thanks, Francesco
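
    One likely failure mode (a guess, but it matches the symptom): mysql_store_result() returns NULL on failure, and calling mysql_num_rows(NULL) is exactly the kind of thing that segfaults between those two trace lines. A minimal sketch of a defensive version:

        MY_result = mysql_store_result(MY_mysql);
        if (MY_result == NULL) {
            // NULL either means "no result set" or an error; mysql_errno()
            // distinguishes the two. A corrupted table would show up here.
            if (mysql_errno(MY_mysql) != 0) {
                LAST_ERROR = mysql_error(MY_mysql);
                cout << "store_result failed: " << LAST_ERROR << endl;
                // a good spot to trigger CHECK TABLE / REPAIR TABLE
            }
            return false;
        }
        B_stored_results = true;
        LAST_affected_rows = mysql_num_rows(MY_result);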


  • does php mysql_fetch_array work with an html input box?

    - by dexter
    This is my entire PHP code:

        <?php
        if (empty($_POST['selid'])) {
            echo "no value selected";
        } else {
            $con = mysql_connect("localhost", "root", "");
            if (mysql_select_db("cdcol", $con)) {
                $sql = "SELECT * FROM products WHERE Id = '$_POST[selid]'";
                if ($result = mysql_query($sql)) {
                    echo "<form name=\"updaterow\" method=\"post\" action=\"dbtest.php\">";
                    while ($row = mysql_fetch_array($result)) {
                        echo "Id :<input type=\"text\" name=\"ppId\" value=".$row['Id']." READONLY></input></br>";
                        echo "Name :<input type=\"text\" name=\"pName\" value=".$row['Name']."></input></br>";
                        echo "Description :<input type=\"text\" name=\"pDesc\" value=".$row['Description']."></input></br>";
                        echo "Unit Price :<input type=\"text\" name=\"pUP\" value=".$row['UnitPrice']."></input></br>";
                        echo "<input type=\"hidden\" name=\"mode\" value=\"Update\"/>";
                    }
                    echo "<input type=\"submit\" value=\"Update\">";
                    echo "</form>";
                } else {
                    echo "Query ERROR";
                }
            }
        }
        ?>

    The PROBLEM here is: if the value I am getting from the database using mysql_fetch_array($result) is something like "my product" (say, a Description), then the input box shows only "my" -- anything after a blank space doesn't get displayed. Can an input box like the one above display data with two or more words (separated by blank spaces)?
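
    Almost certainly yes -- the catch is in the generated HTML, not in mysql_fetch_array: the value attribute is emitted without quotes, so the browser cuts it off at the first space (value=my product). A sketch of the fix, quoting and escaping the attribute:

        // htmlspecialchars also protects against quotes inside the data
        echo "Name :<input type=\"text\" name=\"pName\" value=\""
             . htmlspecialchars($row['Name']) . "\"><br/>";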


  • twitter bootstrap typeahead (method 'toLowerCase' of undefined)

    - by mmoscosa
    I am trying to use Twitter Bootstrap typeahead to get the manufacturers from my DB. Because Twitter Bootstrap typeahead does not support ajax calls, I am using this fork: https://gist.github.com/1866577

    On that page there is a comment that mentions how to do exactly what I want to do. The problem is that when I run my code I keep getting:

        Uncaught TypeError: Cannot call method 'toLowerCase' of undefined

    I googled around and tried switching my jQuery file between the minified and non-minified versions, as well as the one hosted on Google Code, and I kept getting the same error. My code currently is as follows:

        $('#manufacturer').typeahead({
            source: function(typeahead, query) {
                $.ajax({
                    url: window.location.origin + "/bows/get_manufacturers.json",
                    type: "POST",
                    data: "",
                    dataType: "JSON",
                    async: false,
                    success: function(results) {
                        var manufacturers = new Array;
                        $.map(results.data.manufacturers, function(data, item) {
                            var group;
                            group = {
                                manufacturer_id: data.Manufacturer.id,
                                manufacturer: data.Manufacturer.manufacturer
                            };
                            manufacturers.push(group);
                        });
                        typeahead.process(manufacturers);
                    }
                });
            },
            property: 'name',
            items: 11,
            onselect: function(obj) {
            }
        });

    On the url field I added window.location.origin to avoid any problems, as already discussed in another question. Also, I was using $.each() before and then decided to use $.map(), as recommended by Tomislav Markovski in a similar question. Anyone have any idea why I keep getting this problem? Thank you
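
    One hypothesis worth checking: the typeahead matches and renders each item via the configured property ('name' here), but the objects being pushed only have manufacturer_id and manufacturer keys, so item.name is undefined when the plugin calls .toLowerCase() on it. Aligning the data with the option would be a minimal fix:

        group = {
            id: data.Manufacturer.id,
            name: data.Manufacturer.manufacturer   // key now matches property: 'name'
        };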


  • JS encodeURIComponent result different from the one created by FORM

    - by Marco Demaio
    I thought values entered in forms were properly encoded by browsers, but this simple test shows it's not true:

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html><head>
        <meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
        <title></title>
        </head><body>
        <form id="test" action="test_get_vs_encodeuri.html" method="GET"
              onsubmit="alert(encodeURIComponent(this.one.value));">
            <input name="one" type="text" value="Euro-€">
            <input type="submit" value="SUBMIT">
        </form>
        </body></html>

    When hitting the submit button, encodeURIComponent encodes the input value as "Euro-%E2%82%AC", while the browser writes only a simple "Euro-%80" into the GET query. Could someone explain? Or is encodeURIComponent doing unnecessary conversions?
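
    The short explanation (hedged, but this is standard behavior): encodeURIComponent is specified to percent-encode the UTF-8 bytes of the string (€ is the three bytes E2 82 AC), while form submission encodes values in the page's charset -- windows-1252 here, where € is the single byte 0x80. A sketch of one way to make the two agree is simply to serve the page as UTF-8:

        <!-- with a UTF-8 charset the form submits Euro-%E2%82%AC too -->
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">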


  • What's the best way to develop a debugging window for an ajax ASP.Net MVC application

    - by KallDrexx
    While developing my ASP.NET MVC application, I have started to see the need for a debugging console window to assist in figuring out what is going right and wrong in my code. I read the last few chapters of the Pro ASP.NET MVC book, where the author details how to use HTTP modules to show page load/creation times and LINQ to SQL query logs, both of which I definitely want to be able to see. However, since I am loading a lot of small sections of my page individually with ajax, I don't want the debug information right there in the middle of my screen.

    So the idea I came up with was to have a separate browser window (openable by a link or some JavaScript) with a console log that can contain entries logged both from JavaScript and from the ASP.NET MVC run. The former should be relatively easy, but I'm having trouble coming up with a way to log the ASP.NET information in ajax requests.

    The direction I have been thinking of going is to create an HttpModule (like the Pro MVC book does) and have that module emit JavaScript that appends the log messages to the console. The issue I see with this is finding a way to get the log messages from the controller's action methods to the HttpModule's methods. The only way I see to do this is with a singleton, but I'm not sure if singletons are bad practice for a stateless web application. Furthermore, it seems like if I return JSON with my ajax calls (instead of pure HTML), then that won't work at all anyway, unless there is a way to add data to an existing JSON structure inside the HttpModule.

    How does everyone else handle this type of debugging in heavily ajax applications? For reference, the JavaScript library I am using is jQuery.
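
    One hedged sketch of an alternative: instead of an HttpModule (which runs after the action has already produced its JSON), an MVC action filter can wrap a JsonResult's payload with the accumulated log before serialization. All names here are illustrative:

        // Wraps every JsonResult as { payload: ..., debugLog: [...] } so the
        // client-side console window can peel off and display debugLog.
        public class DebugLogFilter : ActionFilterAttribute
        {
            public override void OnActionExecuted(ActionExecutedContext ctx)
            {
                var json = ctx.Result as JsonResult;
                if (json != null)
                {
                    json.Data = new { payload = json.Data, debugLog = DebugLog.Messages };
                }
            }
        }

    where DebugLog is some hypothetical per-request store; stashing it in HttpContext.Items avoids the singleton question entirely.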


  • Building a many-to-many db schema using only an unpredictable number of foreign keys

    - by user1449855
    Good afternoon (at least around here),

    I have a many-to-many relationship schema that I'm having trouble building. The main problem is that I'm only working with primary and foreign keys (no varchars or enums, to simplify things), and the number of many-to-many relationships is not predictable and can increase at any time. I looked around at various questions and couldn't find something that directly addressed this issue.

    I split the problem in half, so I now have two one-to-many schemas. One is solved, but the other is giving me fits. Let's assume table FOO is a standard, boring table that has a simple primary key; it's the "one" side of a one-to-many relationship. Table BAR can relate to multiple keys of FOO, and the number of related keys is not known beforehand. An example: a query on FOO returns ids 3, 4, 5; BAR needs a unique key that relates to 3, 4, 5 (though there could be any number of ids returned).

    The usual join table does not seem to work:

        Table FOO_BAR
        primary_key | foo_id | bar_id |

    since FOO returns 3 unique keys, and here bar_id has a one-to-one relationship with foo_id. Having two join tables does not seem to work either, as it still can't map foo_ids 3, 4, 5 to a single bar_id:

        Table FOO_TO_BAR
        primary_key | foo_id | bar_to_foo_id |

        Table BAR_TO_FOO
        primary_key | foo_to_bar_id | bar_id |

    What am I doing wrong? Am I making things more complicated than they are? How should I approach the problem? Thanks a lot for the help.
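
    For what it's worth, the standard junction table already handles this case: the same bar_id simply appears on several rows, one row per related foo_id, so the mapping is many-to-many rather than one-to-one. A minimal sketch (table and column names assumed from the post):

        CREATE TABLE foo_bar (
            foo_id INT NOT NULL REFERENCES foo(id),
            bar_id INT NOT NULL REFERENCES bar(id),
            PRIMARY KEY (foo_id, bar_id)   -- one row per pair, no surrogate key needed
        );

        -- bar #1 related to foo ids 3, 4 and 5:
        INSERT INTO foo_bar (foo_id, bar_id) VALUES (3, 1), (4, 1), (5, 1);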


  • Converting a Linq expression tree that relies on SqlMethods.Like() for use with the Entity Framework

    - by JohnnyO
    I recently switched from using LINQ to SQL to the Entity Framework. One of the things that I've been really struggling with is getting a general-purpose IQueryable extension method, built for LINQ to SQL, to work with the Entity Framework. The extension method has a dependency on the Like() method of SqlMethods, which is LINQ to SQL specific.

    What I really like about this extension method is that it allows me to dynamically construct a SQL LIKE statement on any object at runtime by simply passing in a property name (as a string) and a query clause (also as a string). Such an extension method is very convenient when using grids like Flexigrid or jqGrid. Here is the LINQ to SQL version (taken from this tutorial: http://www.codeproject.com/KB/aspnet/MVCFlexigrid.aspx):

        public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword)
        {
            var type = typeof(T);
            var property = type.GetProperty(propertyName);
            var parameter = Expression.Parameter(type, "p");
            var propertyAccess = Expression.MakeMemberAccess(parameter, property);
            var constant = Expression.Constant("%" + keyword + "%");
            var like = typeof(SqlMethods).GetMethod("Like", new Type[] { typeof(string), typeof(string) });
            MethodCallExpression methodExp = Expression.Call(null, like, propertyAccess, constant);
            Expression<Func<T, bool>> lambda = Expression.Lambda<Func<T, bool>>(methodExp, parameter);
            return source.Where(lambda);
        }

    With this extension method, I can simply do the following:

        someList.Like("FirstName", "mike");

    or

        anotherList.Like("ProductName", "widget");

    Is there an equivalent way to do this with the Entity Framework? Thanks in advance.
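
    A hedged sketch of one common substitution: LINQ to Entities translates string.Contains into SQL LIKE '%...%', so the expression tree can call Contains on the property instead of SqlMethods.Like:

        public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword)
        {
            var type = typeof(T);
            var property = type.GetProperty(propertyName);
            var parameter = Expression.Parameter(type, "p");
            var propertyAccess = Expression.MakeMemberAccess(parameter, property);
            var constant = Expression.Constant(keyword, typeof(string));
            // string.Contains(string) maps to LIKE '%keyword%' in EF
            var contains = typeof(string).GetMethod("Contains", new[] { typeof(string) });
            var body = Expression.Call(propertyAccess, contains, constant);
            var lambda = Expression.Lambda<Func<T, bool>>(body, parameter);
            return source.Where(lambda);
        }

    (StartsWith and EndsWith map to the anchored LIKE forms in the same way.)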


  • What is the correct way to implement a massive hierarchical, geographical search for news?

    - by Philip Brocoum
    The company I work for is in the business of sending press releases, and we want to make it possible for interested parties to search for press releases based on a number of criteria, the most important being location. For example, someone might search for all news sent to New York City, Massachusetts, or ZIP code 89134, sent from a governmental institution, under the topic of "traffic". Or whatever.

    The problem is, we've sent, literally, hundreds of thousands of press releases, and searching is slow and complex. For example, a press release sent to Queens, NY should show up in the search I mentioned above even though it wasn't specifically sent to New York City, because Queens is a subset of New York City. We may also want to add "and", "or", negation, and text search to the query to create complex searches. These searches also have to be fast enough to function as dynamic RSS feeds.

    I really don't know anything about search theory or how it's properly done. The way we are getting by right now is using a data mart that stores the locations each release was sent to in a single table. However, because of the subset issue mentioned above, the data mart is gigantic, with millions of rows. And we haven't even implemented cities yet; there are about 50,000 cities in the United States, which will blow up the size of the data mart so much that I'm afraid it just won't work anymore.

    Anyway, I realize this is not a simple question and there won't be a "do this" answer, but I'm hoping one of you can point me in the right direction where I can learn how massive searches are done, because I really know nothing about it, and such a search engine is turning out to be incredibly difficult to make. Thanks! I know there must be a way, because if Google can search the entire internet, we must be able to search our own database :-)
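
    One standard trick for the "Queens is inside New York City" problem, sketched here with assumed table names: instead of expanding every release to all of its enclosing locations, store each release once against its most specific location and keep a separate ancestor (transitive-closure) table for the location hierarchy:

        CREATE TABLE location_ancestor (
            location_id INT NOT NULL,   -- e.g. Queens
            ancestor_id INT NOT NULL,   -- e.g. NYC, NY state, USA (and itself)
            PRIMARY KEY (location_id, ancestor_id)
        );

        -- releases sent anywhere inside New York City:
        SELECT r.*
        FROM press_release r
        JOIN location_ancestor la ON la.location_id = r.location_id
        WHERE la.ancestor_id = 42;   -- New York City's location id (illustrative)

    The closure table grows with the hierarchy (roughly locations times depth), not with the number of releases, which is what keeps the data mart from exploding.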


  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key:

        CREATE TABLE [dbo].[Customers](
            [CustomerId] [int] IDENTITY(1,1) NOT NULL,
            [CustomerName] [varchar](100) NOT NULL,
            [Deleted] [bit] NOT NULL,
            [Active] [bit] NOT NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([CustomerId] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    This is the query I'll be using to see what the execution plan shows:

        SELECT CustomerName FROM Customers

    Executing this command with no additional non-clustered index, the execution plan shows me:

        I/O cost = 3.45646
        Operator cost = 4.57715

    Now I'm trying to see if it's possible to improve performance, so I've created a non-clustered index for this table.

    1) First non-clustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    Executing the SELECT against the Customers table again, the execution plan shows me:

        I/O cost = 2.79942
        Operator cost = 3.92001

    That seems better. Now I've deleted the non-clustered index I just created, in order to create a new one.

    2) Second non-clustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC
        )
        INCLUDE ([CustomerName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this new non-clustered index, I've executed the SELECT statement again, and the execution plan shows the same result:

        I/O cost = 2.79942
        Operator cost = 3.92001

    So, which non-clustered index should I use? Why are the I/O and Operator costs the same in both execution plans? Am I doing something wrong, or is this expected? Thank you


  • Rails 3 : create two dimensional hash and add values from a loop

    - by John
    I have two models:

        class Project < ActiveRecord::Base
          has_many :tickets
          attr_accessible ....
        end

        class Ticket < ActiveRecord::Base
          belongs_to :project
          attr_accessible :done_date, :description, ....
        end

    In my ProjectsController I would like to build, in one variable, a two-dimensional hash holding, for one project, all tickets that are done (with done_date as the key and the descriptions as the values). For example, I would like a hash like this:

        # What I'm looking for:
        @tickets_of_project = { "done_date_1" => ["a", "b", "c"], "done_date_2" => ["d", "e"] }

    And here is what I'm currently trying (in ProjectsController):

        def show
          # Get the current project
          @project = Project.find(params[:id])

          # Get all done tickets for the project, ordered by done_date
          @tickets = Ticket.where(:project_id => params[:id]).where("done_date IS NOT NULL").order(:done_date)

          # Create a new hash
          @tickets_of_project = Hash.new {}

          # Loop over all tickets and fill the hash
          @tickets.each do |ticket|
            # TODO: HOW TO PUT ticket.value IN "tickets_of_project" WITH KEY = ticket.done_date ??
          end
        end

    I don't know if I'm on the right track (maybe use .map instead of a where query), but how can I fill the hash, checking whether the key already exists or not? Thanks :)
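
    A minimal sketch of the usual pattern (assuming the descriptions are what you want as values): either a hash whose default value is an array, or simply Enumerable#group_by:

        # Option 1: default-to-array hash
        @tickets_of_project = Hash.new { |hash, key| hash[key] = [] }
        @tickets.each { |t| @tickets_of_project[t.done_date] << t.description }

        # Option 2: group_by does the bucketing in one call
        @tickets_of_project = @tickets.group_by(&:done_date)
        # => { done_date_1 => [ticket, ticket, ...], ... }

    Note that Hash.new {} with an empty block still returns nil as the default, so the block has to both create and store the array, as in option 1.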


  • Script working with mysql and php into a textarea and back

    - by Tribalcomm
    I am trying to write a custom script that keeps a list of strings in a textarea, where each line of the textarea corresponds to a row in a table. The problem I have is how to make the script add, update, or delete rows based on a single submit.

    So, for instance, I currently have 3 rows in the database:

        john
        sue
        mark

    I want to be able to delete sue and add richard, and have it delete the row with sue and insert a row for richard. My code so far is as follows. To query the db and list it in the textarea:

        $basearray = mysql_query("SELECT name FROM mytable ORDER BY name");

        <textarea name="names" cols=6 rows=12>
        <?php
        foreach ($basearray as $base) {
            echo $base->name . "\n";
        }
        ?>
        </textarea>

    After the submit, I have:

        <?php
        $namelist = $_REQUEST[names];
        $newarray = explode("\n", $namelist);
        foreach ($newarray as $name) {
            if (!in_array($name, $basearray)) {
                mysql_query("DELETE FROM mytable WHERE word='$name'");
            } elseif (in_array($name, $basearray)) {
                ;
            } else {
                mysql_query("INSERT INTO mytable (name) VALUES ("$name")");
            }
        }
        ?>

    Please tell me what I am doing wrong. None of it works when I edit the contents of the textarea. Thanks!
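
    Several things conspire here (a hedged sketch below, assuming table mytable with column name): mysql_query() returns a result resource rather than an array, so neither the foreach nor the in_array() calls see any names, and the delete/insert branches are inverted -- a name missing from the textarea should be deleted, while a name missing from the database should be inserted. Something along these lines:

        // fetch the current names into a real array first
        $result = mysql_query("SELECT name FROM mytable ORDER BY name");
        $basearray = array();
        while ($row = mysql_fetch_assoc($result)) {
            $basearray[] = $row['name'];
        }

        // normalize the submitted lines (trim strips stray \r, filter drops blanks)
        $newarray = array_filter(array_map('trim', explode("\n", $_POST['names'])));

        // in the DB but no longer in the textarea -> delete
        foreach (array_diff($basearray, $newarray) as $name) {
            mysql_query("DELETE FROM mytable WHERE name='" . mysql_real_escape_string($name) . "'");
        }

        // in the textarea but not yet in the DB -> insert
        foreach (array_diff($newarray, $basearray) as $name) {
            mysql_query("INSERT INTO mytable (name) VALUES ('" . mysql_real_escape_string($name) . "')");
        }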


  • Using NHibernate to select entities based on activity of children entities

    - by mannish
    I'm having a case of the Mondays... I need to select blog posts based on recent activity in the posts' comment collections (a Post has a List<Comment> property and, likewise, a Comment has a Post property, establishing the relationship). I don't want to show the same post twice, and I only need a subset of the entities, not all of the posts.

    My first thought was to grab all posts that have comments, then order them based on the most recent comment. For this to work, I'm pretty sure I'd have to limit the comments for each Post to the first/newest Comment. Lastly, I'd simply take the top 5 (or whatever max results number I want to pass into the method).

    My second thought was to grab all of the comments, ordered by CreatedOn, and filter them so there's only one Comment per Post, then return those top (whatever) posts. This seems the same as the first option, just going through the back door.

    I've got an ugly, two-query option working, with some LINQ on the side for filtering, but I know there's a more elegant way to do it using the NHibernate API. Hoping to see some good ideas here.


  • Telephone Number to Geolocation UK

    - by David Toy
    Is there a service that provides latitude and longitude for UK phone numbers? For example:

        Query: 0141 574 xxx
        Returns: (55.8659829, -4.2602205)  [Glasgow City Centre]

    Allow me to stress that I am not looking for reverse directory enquiries. I am more interested in the 'local area', for things like weather by phone or "Where's my nearest pizza shop?". If this service doesn't exist, your suggestions on how to implement it or where to get data from would also be incredibly useful.

    I am aware that Ofcom provides a list of area codes with a place name [1] suitable for geolocation, but I have my concerns about resolution. I see this as a particular problem in smaller towns and rural areas, where an area code will cover a large geographical area. Second example:

        Area Code: 01555, Ofcom: Lanark

        However:
        01555 860xxx is Crossford   (4 miles W of Lanark)
        01555 77xxxx is Carluke     (5 miles NW)
        01555 89xxxx is Lesmahagow  (5 miles SW)
        01555 840xxx is Carnwath    (7 miles NE)

    Therefore 01555 covers roughly 80 square miles. That's not particularly local.

    [1] Ofcom Area Code Tool: http://www.ofcom.org.uk/consumer/2009/09/telephone-area-codes-tool/


  • Datastore performance, my code or the datastore latency

    - by fredrik
    For the last month I've had a bit of a problem with a quite basic datastore query. It involves two db.Models, with one referring to the other via a db.ReferenceProperty. The problem is that, according to the admin logs, the request takes about 2-4 seconds to complete. I stripped it down to a bare form and a list to display the results. The put works fine, but the get accumulates (in my opinion) way too much CPU time.

        # The get looks like this:
        outputData['items'] = {}
        labelsData = Label.all()
        for label in labelsData:
            labelItem = label.item.name
            if labelItem not in outputData['items']:
                outputData['items'][labelItem] = {'item': labelItem, 'labels': []}
            outputData['items'][labelItem]['labels'].append(label.text)

        path = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(path, outputData))

        # And the models:
        class Item(db.Model):
            name = db.StringProperty()

        class Label(db.Model):
            text = db.StringProperty()
            lang = db.StringProperty()
            item = db.ReferenceProperty(Item)

    I've tried a number of different approaches, e.g. storing all Label keys in the Item model as a db.ListProperty instead of using the ReferenceProperty. My test data is just 10 rows in Item and 40 in Label.

    So my question: is it a fool's errand to try to optimize this, since the high CPU usage is due to problems with the datastore, or have I just screwed up somewhere in the code?

    ..fredrik
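
    A hedged hypothesis that fits the numbers: label.item.name dereferences the ReferenceProperty lazily, issuing one extra datastore get per Label -- the classic N+1 pattern, so 40 labels means 40 round trips. A sketch of the usual batch-prefetch workaround:

        # Read the raw Item keys without dereferencing, then fetch all
        # referenced Items in a single db.get() call.
        labels = list(Label.all())
        item_keys = list(set(Label.item.get_value_for_datastore(l) for l in labels))
        items_by_key = dict((item.key(), item) for item in db.get(item_keys))

        for label in labels:
            item = items_by_key[Label.item.get_value_for_datastore(label)]
            # ... use item.name as before, with no per-label datastore get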


  • Rails: fighting long HTTP response times with ajax. Is it a good idea? Please help with implementation

    - by baranov
    Hi, everybody!

    I've googled some tutorials and browsed some SO answers, but was unable to find a recipe for my problem. I'm writing a web site which is supposed to display an almost-realtime stock chart. Data is stored in a constantly updating MySQL database, and I wrote a find_by_sql query that fetches all the data I need to get my chart drawn. Everything is OK except performance: it takes from one second to one minute for different queries to fetch all the data from the database, and this time includes the necessary (My)SQL-server-side calculations. This is simply unacceptable.

    I got the following idea: if the data is queried from the MySQL server one point at a time instead of as an entire dataset, it takes only about 1-100 ms to get an individual point. I imagine the data-fetch process might be browser-driven. After the user presses the button to get a chart drawn, the controller makes one request to the database and renders, say, a progress bar at "1% ready". When the browser gets the response, it immediately makes an (ajax) request, the server fetches the next piece of data and renders "2%", and so on, until all the data is ready and the server displays the requested chart.

    Could this be implemented in Rails + JS? Is there a tutorial for solving a similar problem on the web? I suppose if the thing is feasible at all, somebody should have done this before. I have read several articles about ajax, and I believe I understand the general principles, but I've never done nontrivial ajax programming myself. Thanks for your time!
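
    The browser-driven loop itself is straightforward; here is a hedged jQuery sketch (all URLs and helper names are made up for illustration):

        // Ask the server for the next chunk; the response carries the
        // percentage done, and the loop re-fires until it hits 100%.
        function fetchChunk(jobId) {
            $.getJSON('/charts/' + jobId + '/next_chunk', function (data) {
                updateProgressBar(data.percent);   // hypothetical UI helper
                if (data.percent < 100) {
                    fetchChunk(jobId);             // immediately request the next piece
                } else {
                    drawChart(data.points);        // all data has arrived
                }
            });
        }

    That said, it may be worth profiling the slow query first: hundreds of tiny requests carry their own overhead, and the 1-100 ms per-point figure will not simply sum to less than the original query time.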


  • Optimize an SQL statement

    - by kovshenin
    Hey, I'm running WordPress; the database diagram can be found here: http://codex.wordpress.org/Database_Description

    After applying tonnes of filters and some hooks to the core, I'm left with the following query:

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        JOIN wp_postmeta ppmeta_beds ON (ppmeta_beds.post_id = wp_posts.ID AND ppmeta_beds.meta_key = 'pp-general-beds' AND ppmeta_beds.meta_value >= 2)
        JOIN wp_postmeta ppmeta_baths ON (ppmeta_baths.post_id = wp_posts.ID AND ppmeta_baths.meta_key = 'pp-general-baths' AND ppmeta_baths.meta_value >= 3)
        JOIN wp_postmeta ppmeta_furnished ON (ppmeta_furnished.post_id = wp_posts.ID AND ppmeta_furnished.meta_key = 'pp-general-furnished' AND ppmeta_furnished.meta_value = 'yes')
        JOIN wp_postmeta ppmeta_pool ON (ppmeta_pool.post_id = wp_posts.ID AND ppmeta_pool.meta_key = 'pp-facilities-pool' AND ppmeta_pool.meta_value = 'yes')
        JOIN wp_postmeta ppmeta_pool_type ON (ppmeta_pool_type.post_id = wp_posts.ID AND ppmeta_pool_type.meta_key = 'pp-facilities-pool-type' AND ppmeta_pool_type.meta_value IN ('tennis', 'voleyball', 'basketball', 'fitness'))
        JOIN wp_postmeta ppmeta_sport ON (ppmeta_sport.post_id = wp_posts.ID AND ppmeta_sport.meta_key = 'pp-facilities-sport' AND ppmeta_sport.meta_value = 'yes')
        JOIN wp_postmeta ppmeta_sport_type ON (ppmeta_sport_type.post_id = wp_posts.ID AND ppmeta_sport_type.meta_key = 'pp-facilities-sport-type' AND ppmeta_sport_type.meta_value IN ('tennis', 'voleyball', 'basketball', 'fitness'))
        JOIN wp_postmeta ppmeta_parking ON (ppmeta_parking.post_id = wp_posts.ID AND ppmeta_parking.meta_key = 'pp-facilities-parking' AND ppmeta_parking.meta_value = 'yes')
        JOIN wp_postmeta ppmeta_parking_type ON (ppmeta_parking_type.post_id = wp_posts.ID AND ppmeta_parking_type.meta_key = 'pp-facilities-parking-type' AND ppmeta_parking_type.meta_value IN ('street', 'off-street', 'garage'))
        JOIN wp_postmeta ppmeta_garden ON (ppmeta_garden.post_id = wp_posts.ID AND ppmeta_garden.meta_key = 'pp-facilities-garden' AND ppmeta_garden.meta_value = 'yes')
        JOIN wp_postmeta ppmeta_garden_type ON (ppmeta_garden_type.post_id = wp_posts.ID AND ppmeta_garden_type.meta_key = 'pp-facilities-garden-type' AND ppmeta_garden_type.meta_value IN ('private', 'communal'))
        JOIN wp_postmeta ppmeta_type ON (ppmeta_type.post_id = wp_posts.ID AND ppmeta_type.meta_key = 'pp-general-type' AND ppmeta_type.meta_value IN ('villa', 'apartment', 'penthouse'))
        JOIN wp_postmeta ppmeta_status ON (ppmeta_status.post_id = wp_posts.ID AND ppmeta_status.meta_key = 'pp-general-status' AND ppmeta_status.meta_value IN ('off-plan', 'resale'))
        JOIN wp_postmeta ppmeta_location_type ON (ppmeta_location_type.post_id = wp_posts.ID AND ppmeta_location_type.meta_key = 'pp-location-type' AND ppmeta_location_type.meta_value IN ('beachfront', 'countryside', 'town-center', 'near-the-sea', 'hillside', 'private-resort'))
        JOIN wp_postmeta ppmeta_price_range ON (ppmeta_price_range.post_id = wp_posts.ID AND ppmeta_price_range.meta_key = 'pp-general-price' AND ppmeta_price_range.meta_value BETWEEN 10000 AND 50000)
        JOIN wp_postmeta ppmeta_area_range ON (ppmeta_area_range.post_id = wp_posts.ID AND ppmeta_area_range.meta_key = 'pp-general-area' AND ppmeta_area_range.meta_value BETWEEN 50 AND 150)
        WHERE 1=1
          AND (((wp_posts.post_title LIKE '%fdsfsad%') OR (wp_posts.post_content LIKE '%fdsfsad%')))
          AND wp_posts.post_type = 'property'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 10

    It's way too big. Could anybody please show me a way of optimizing all those joins into fewer statements? As you can see, they all use the same table under different names. I'm not an SQL guru, but I think there should be a way, because this is insane ;) Thanks!

    Update: Here's what EXPLAIN returns: http://twitpic.com/1cd36p
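
    A hedged sketch of the standard rewrite for stacks of wp_postmeta self-joins: join the meta table once, OR the individual key/value tests together, and then require that all of them matched via GROUP BY / HAVING:

        SELECT wp_posts.*
        FROM wp_posts
        JOIN wp_postmeta pm ON pm.post_id = wp_posts.ID
        WHERE wp_posts.post_type = 'property'
          AND wp_posts.post_status IN ('publish', 'private')
          AND (
               (pm.meta_key = 'pp-general-beds'      AND pm.meta_value + 0 >= 2)
            OR (pm.meta_key = 'pp-general-baths'     AND pm.meta_value + 0 >= 3)
            OR (pm.meta_key = 'pp-general-furnished' AND pm.meta_value = 'yes')
            -- ... one OR block per remaining criterion ...
          )
        GROUP BY wp_posts.ID
        HAVING COUNT(DISTINCT pm.meta_key) = 16   -- total number of criteria
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 10

    (meta_value is a text column, hence the "+ 0" trick to force a numeric comparison; whether this beats the multi-join plan depends on the indexes, so compare EXPLAIN output for both versions.)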


  • rapid application development tools for very basic GUI apps

    - by Jurij
    I know there are many RAD platforms out there. In fact, there are so many that I'm having a hard time finding out which one fits me best. What I want is a RAD tool that lets me define a database data model (create DB tables) and then create (view and edit) forms for the various tables. Data input, updating and various queries should be easy, and the GUI should be generated automatically. I'd like to add some additional functionality by coding (such as various complex calculations on the data).

    I'm a programmer, so I'm willing to learn to use a more complete, full-blown RAD solution if you can point me to it (NetBeans and Ruby on Rails being two such frameworks that would probably be high on the list). I'm currently doing Windows Forms logistics apps in .NET. I've actually developed a very crude and basic version of what I need, but I just know that there are solutions out there that are much better, and I'd benefit from knowing how to use them. So in short, the basic requirements:

        * database-based data storage (SQLite if possible)
        * very automated GUI creation
        * desktop-based (as in: not a web app)
        * extendable by coding
        * used for creating simple data entry, view & query apps

    So basically something like Oracle Forms or DotNetMushroom Rapid Application Developer, but for .NET and SQLite if possible.


  • Django LFS - custom views

    - by owca
    For all those Lightning Fast Shop users: I'm trying to implement my own front-page view that will list all products from the shop (under the '/' address). So I have a template:

        {% extends "lfs/shop/shop_base.html" %}
        {% block content %}
        <div id="najnowsze_produkty">
            <ul>
            {% for obj in objects %}
                <li> {{ obj.name }} </li>
            {% endfor %}
            </ul>
        </div>
        {% endblock %}

    and then I've edited the main shop view:

        from lfs.catalog.models import Category
        from lfs.catalog.models import Product

        def shop_view(request, template_name="lfs/shop/shop.html"):
            products = Product.objects.all()
            shop = lfs_get_object_or_404(Shop, pk=1)
            return render_to_response(template_name, RequestContext(request, {
                "shop": shop,
                "products": products
            }))

    but it just shows nothing. When I run Product.objects.all() in the shell, I get results. Any ideas what could cause the problem? Maybe I should filter for products with 'active' status only? But I'm not sure that would make any difference for all objects.
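
    One mismatch stands out (hedged, but easy to check): the view puts the queryset into the context under the name "products", while the template loops over "objects", which is undefined and so silently renders an empty list. Aligning the two names should be enough:

        {% for obj in products %}
            <li> {{ obj.name }} </li>
        {% endfor %}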


  • Automation Error when exporting Excel data to SQL Server

    - by brohjoe
    I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data from Excel to SQL Server. The error I get is:

        Run-time error '-2147217843 (80040e4d)': Automation error

    I checked the MSDN site, and it suggested that this may be due to a bug associated with the sqloledb provider, and that one way to mitigate it is to use ODBC. Well, I changed the connection string to reflect the ODBC provider and associated parameters, and I'm still getting the same error. Here is the code with ODBC as the provider:

        Dim cnt As ADODB.Connection
        Dim rst As ADODB.Recordset
        Dim stSQL As String
        Dim wbBook As Workbook
        Dim wsSheet As Worksheet
        Dim rnStart As Range

        Public Sub loadData()
            'This was set up using Microsoft ActiveX Data Components version 6.0.
            'Create the ADODB connection object, open the connection
            'and construct the connection string.
            Set cnt = New ADODB.Connection
            cnt.ConnectionString = "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB;Uid=logonalready;Pwd='helpmeOB1';"
            cnt.Open

            On Error GoTo ErrorHandler

            'Open Excel and run the query to export data to SQL Server.
            strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0'," & _
                "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]"
            cnt.Execute (strSQL)

            'Error handling.
        ErrorExit:
            'Reclaim memory from the connection objects
            Set rst = Nothing
            Set cnt = Nothing
            Exit Sub

        ErrorHandler:
            MsgBox Err.Description, vbCritical
            Resume ErrorExit

            'Clean up and reclaim memory resources.
            If CBool(cnt.State And adStateOpen) Then
                Set rst = Nothing
                Set cnt = Nothing
                cnt.Close
            End If
        End Sub
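
    Two hedged observations, since 0x80040e4d is an OLE DB authentication-failure code: the password is wrapped in single quotes inside the connection string, so the driver may be sending 'helpmeOB1' -- quotes included -- as the literal password; and the code declares stSQL but assigns to strSQL, a mismatch that Option Explicit would flag. A sketch of the cleaned-up fragment (credentials assumed from the post):

        Option Explicit  ' catches the stSQL/strSQL mismatch at compile time

        cnt.ConnectionString = "Driver={SQL Server};" & _
            "Server=onlineSQLServer2010.foo.com;Database=fooDB;" & _
            "Uid=logonalready;Pwd=helpmeOB1;"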


  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while re-architecting how the data is imported, I came across an interesting issue.

    Firstly, the way our system works (loosely speaking) is that we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am re-architecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data and has already proven to be much better than what we had.

    When it comes to updating existing data, my initial thought was to query only for data that was updated. There is a 'Modified' field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest constantly running 'batch' processes that walk through all listings regardless of their updated status.

    Is that the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?


  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue about client/server application design. We have a browser-based management application with many users, so obviously there is a user-management module within the application.

    I have always thought that having a user table in the database to keep all the login details was good enough. However, a senior developer said user management should be done in the database server layer, and that anything else is poorly designed. What he meant was: if a user wants to use the application, then an account should be created in the user table AND as a database server login as well. So if I have 50 users using my application, then I should have 50 database server logins.

    I personally think having just one database server account for this database is enough: just grant that account the privileges needed for all the operations the application performs. The users interacting with the application should have their accounts created and managed in the user table, as they are more related to the application layer. I don't see why a database server login should be created for every user added to the application's user table; a single database server login should be enough to handle all the queries sent by the application.

    I really hope to hear some suggestions/opinions on whether I'm missing something - performance or security issues, perhaps? Thank you very much.

