Search Results

Search found 18566 results on 743 pages for 'query hints'.


  • Help with fql.multiQuery

    - by Daniel Schaffer
    I'm playing around with the Facebook API's fql.multiQuery method. I'm just using the API Test Console, and trying to get a successful response but can't seem to figure out exactly what it wants. Here's the text I'm entering into the "queries" field: {"tags" : "select subject from photo_tag where subject != 601599551 and pid in ( select pid from photo_tag where subject = 601599551 ) and subject in ( select uid2 from friend where uid1 = 601599551 )", "foo" : "select uid from user where uid = 601599551"} All it'll give me is a queries parameter: array expected. error. I've also tried just about every permutation I could think of involving wrapping the name/query pairs in their own curly braces, adding brackets, adding whitespace, removing whitespace in case it didn't want an associative array (for those watching the edits, I just found out about these wonderful things now... oy), all to no avail. Is there something painfully obvious I'm missing here, or do I need to make like Chuck Norris Jon Skeet and simply will it to do my bidding? Update: A note to anyone finding this question now: The fql.multiquery test console appears to be broken. You can test your query by clicking on the generated url in the test console and manually adding the "queries" parameter into the querystring.
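
    One way to rule out JSON formatting slips is to build the "queries" value with json_encode rather than typing the JSON by hand. Below is a minimal, hypothetical PHP sketch; it assumes the official PHP SDK's generic api() call accepts the fql.multiquery method, which is worth verifying against the SDK version in use (the test console itself appears broken, per the update above).

        // Hypothetical sketch: let json_encode produce the "queries" parameter.
        $uid = 601599551;
        $queries = array(
            'tags' => "SELECT subject FROM photo_tag WHERE subject != $uid " .
                      "AND pid IN (SELECT pid FROM photo_tag WHERE subject = $uid) " .
                      "AND subject IN (SELECT uid2 FROM friend WHERE uid1 = $uid)",
            'foo'  => "SELECT uid FROM user WHERE uid = $uid",
        );
        // Assumption: $facebook is an instance of the official PHP SDK.
        $result = $facebook->api(array(
            'method'  => 'fql.multiquery',
            'queries' => json_encode($queries),
        ));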

    Read the article

  • PHP/SQL/Wordpress: Group a user list by alphabet

    - by rayne
    I want to create a (fairly big) Wordpress user index with the users categorized alphabetically, like this: A Amy Adam B Bernard Bianca and so on. I've created a custom Wordpress query which works fine for this, except for one problem: It also displays "empty" letters, letters where there aren't any users whose name begins with that letter. I'd be glad if you could help me fix this code so that it only displays the letter if there's actually a user with a name of that letter :) I've tried my luck by checking how many results there are for that letter, but somehow that's not working. (FYI, I use the user photo plugin and only want to show users in the list who have an approved picture, hence the stuff in the SQL query). <?php $alphabet = range('A', 'Z'); foreach ($alphabet as $letter) { $user_count = $wpdb->get_results("SELECT COUNT(*) FROM wp_users WHERE display_name LIKE '".$letter."%' ORDER BY display_name ASC"); if ($user_count > 0) { $user_row = $wpdb->get_results("SELECT wp_users.user_login, wp_users.display_name FROM wp_users, wp_usermeta WHERE wp_users.display_name LIKE '".$letter."%' AND wp_usermeta.meta_key = 'userphoto_approvalstatus' AND wp_usermeta.meta_value = '2' AND wp_usermeta.user_id = wp_users.ID ORDER BY wp_users.display_name ASC"); echo '<li class="letter">'.$letter.''; echo '<ul>'; foreach ($user_row as $user) { echo '<li><a href="/author/'.$user->user_login.'">'.$user->display_name.'</a></li>'; } echo '</ul></li>'; } } ?> Thanks in advance!
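
    A likely culprit in the snippet above: $wpdb->get_results() returns an array of row objects, so comparing it with "> 0" never behaves like a count, whereas $wpdb->get_var() returns the single scalar that COUNT(*) produces. A minimal sketch of that check, reusing the question's table and meta filter (a hedged suggestion, not a tested fix):

        // Use get_var() so COUNT(*) comes back as a plain number, and apply the
        // same photo-approval filter as the listing query.
        $user_count = $wpdb->get_var(
            "SELECT COUNT(*) FROM wp_users, wp_usermeta
             WHERE wp_users.display_name LIKE '" . $letter . "%'
               AND wp_usermeta.meta_key = 'userphoto_approvalstatus'
               AND wp_usermeta.meta_value = '2'
               AND wp_usermeta.user_id = wp_users.ID"
        );
        if ($user_count > 0) {
            // ... run the listing query and print the letter block as before
        }

    Alternatively, running the listing query first and checking count($user_row) avoids the extra round trip per letter.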

    Read the article

  • Making dtSearch highlight one hit per phrase, rather than one hit per word-in-a-phrase

    - by Chris
    I'm using dtSearch to highlight text search matches within a document. The code to do this, minus some details and cleanup, is roughly along these lines: SearchJob sj = new SearchJob(); sj.Request = "\"audit trail\""; // the user query sj.FoldersToSearch.Add(path_to_src_document); sj.Execute(); FileConverter fileConverter = new FileConverter(); fileConverter.SetInputItem(sj.Results, 0); fileConverter.BeforeHit = "<a name=\"HH_%%ThisHit%%\"/><b>"; fileConverter.AfterHit = "</b>"; fileConverter.Execute(); string myHighlightedDoc = fileConverter.OutputString; If I give dtSearch a quoted phrase query like "audit trail" then dtSearch will do hit highlighting like this: An <a name="HH_0"/><b>audit</b> <a name="HH_1"/><b>trail</b> is a fun thing to have an <a name="HH_2"/><b>audit</b> <a name="HH_last"/><b>trail</b> about! Note that each word of the phrase is highlighted separately. Instead, I would like phrases to get highlighted as whole units, like this: An <a name="HH_0"/><b>audit trail</b> is a fun thing to have an <a name="HH_last"/><b>audit trail</b> about! This would A) make highlighting look better, B) improve behavior of my JavaScript that helps users navigate from hit to hit, and C) give more accurate counts of the total # of hits. Is there a good way to make dtSearch highlight phrases this way?
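
    dtSearch reports one hit offset per word, so short of a product-specific option the simplest route is to merge adjacent highlights after conversion. The sketch below is a generic post-processing workaround in C#, not a dtSearch API feature; the regex assumes the BeforeHit/AfterHit markup shown above, and it merges any two hits separated only by whitespace, so anchor and hit counts change accordingly.

        // Hypothetical post-processing: collapse '</b> <a .../><b>' seams so
        // consecutive highlighted words render as one highlighted phrase.
        string merged = System.Text.RegularExpressions.Regex.Replace(
            myHighlightedDoc,
            @"</b>(\s+)<a name=""HH_[^""]*""/><b>",
            "$1");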

    Read the article

  • Get column names from a table where one of the column names is a keyword

    - by syedsaleemss
    I'm using a C# .NET Windows Forms application. I have created a database which has many tables. In one of the tables I have entered data. In this table I have 4 columns named key, name, age, value. Here the name "key" of the first column is a keyword. Now I am trying to get these column names into a combo box. I am unable to get the name "key". It works for "key" when I use this code: private void comboseccolumn_SelectedIndexChanged(object sender, EventArgs e) { string dbname = combodatabase.SelectedItem.ToString(); string path = @"Data Source=" + textBox1.Text + ";Initial Catalog=" + dbname + ";Integrated Security=SSPI"; //string path=@"Data Source=SYED-PC\SQLEXPRESS;Initial Catalog=resources;Integrated Security=SSPI"; SqlConnection con = new SqlConnection(path); string tablename = comboBox2.SelectedItem.ToString(); //string query= "Select * from" +tablename+; //SqlDataAdapter adp = new SqlDataAdapter(" Select [Key] ,value from " + tablename, con); SqlDataAdapter adp = new SqlDataAdapter(" Select [" + combofirstcolumn.SelectedItem.ToString() + "]," + comboseccolumn.SelectedItem.ToString() + "\t from " + tablename, con); DataTable dt = new DataTable(); adp.Fill(dt); dataGridView1.DataSource = dt; } This is because I am using "[" in the select query, but it won't work for non-keys. If I remove the "[", it does not work for key. Please suggest how I can get both key and non-key column names.
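
    Square brackets are legal around any SQL Server identifier, reserved word or not, so the usual fix is to bracket every column name rather than special-casing "key". A sketch under that assumption, reusing the variable names from the snippet above:

        // Bracket both selected columns; [name] is valid even when "name" is
        // not a reserved word, so no special-casing is needed.
        string col1 = "[" + combofirstcolumn.SelectedItem.ToString() + "]";
        string col2 = "[" + comboseccolumn.SelectedItem.ToString() + "]";
        SqlDataAdapter adp = new SqlDataAdapter(
            "SELECT " + col1 + ", " + col2 + " FROM [" + tablename + "]", con);

    To fill the combo boxes themselves, querying INFORMATION_SCHEMA.COLUMNS (SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @table) returns every column name, reserved words included, with no bracketing concerns.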

    Read the article

  • jQuery Autocomplete Json Ajax cross browser issue with Google Search Appliance

    - by skyfoot
    I am implementing a jQuery autocomplete on a search form and am getting the suggestions from the Google Search Appliance Autocomplete suggestions service, which returns a result set in JSON. What I am trying to do is go off to the GSA to get suggestions when the user types something in the search box. The URL to get the JSON suggestions is as follows: http://gsaurl/suggest?q=<query>&max=10&site=default_site&client=default_frontend&access=p&format=rich The JSON which is returned is as follows: { "query":"re", "results": [ {"name":"red", "type":"suggest"}, {"name":"read", "type":"suggest"}] } The jQuery autocomplete code is as follows: $('#q').autocomplete(searchUrl, { width: 320, dataType: 'json', highlight: false, scroll: true, scrollHeight: 300, parse: function(data) { var array = new Array(); for(var i=0;i<data.results.length;i++) { array[i] = { data: data.results[i], value: data.results[i].name, result: data.results[i].name }; } return array; }, formatItem: function(row) { return row.name; } }); This works in IE but fails in Firefox, as the data returned in the parse function is null. Any ideas why this would be the case? Workaround: I created an ASPX page to call the GSA suggest service and return the JSON it produces. Using this page as a proxy and setting it as the URL in the jQuery autocomplete worked in both IE and Firefox.
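
    The IE/Firefox difference is consistent with a cross-domain XMLHttpRequest being blocked, which is also why the proxy workaround succeeds: the GSA suggest service lives on a different host than the page. A minimal sketch of such a proxy as an ASP.NET handler, with the GSA host taken from the URL above (a hypothetical illustration, not the asker's actual page):

        // Hypothetical ASHX proxy: fetch the GSA suggestion JSON server-side and
        // relay it, so the browser only ever talks to the page's own origin.
        public class SuggestProxy : System.Web.IHttpHandler
        {
            public void ProcessRequest(System.Web.HttpContext context)
            {
                string q = context.Request.QueryString["q"] ?? "";
                string url = "http://gsaurl/suggest?q=" + System.Web.HttpUtility.UrlEncode(q)
                           + "&max=10&site=default_site&client=default_frontend&access=p&format=rich";
                using (var client = new System.Net.WebClient())
                {
                    context.Response.ContentType = "application/json";
                    context.Response.Write(client.DownloadString(url));
                }
            }
            public bool IsReusable { get { return true; } }
        }

    Pointing searchUrl at this handler keeps the parse function identical in both browsers.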

    Read the article

  • Set up WLAN in 3-level house

    - by Balint Erdi
    I'm having a hard time setting up the network in our house. It has three levels (basement, ground floor, first level). The WLAN is set up by an ASUS RT-N12 router which provides perfect coverage for the ground floor and the basement. However, I set up my "home office" in the basement where the signal barely arrived. So I purchased a TP-Link TL-WA901ND (300 Mbps) Access Point which I set up in the other corner of the ground floor to expand the ASUS router's range. I used the AP's Repeater mode for that. The distance between my computer and the TP-Link AP is 6-7 meters. There is a staircase going down from the ground floor to the basement, so there are no solid walls between the computer and the AP. This setup mostly works (I am writing this from the basement) but it is not always reliable (the signal strength sometimes goes down to ~40% of the max), so I wonder if I am doing it correctly or if there is a better way. Screenshots of the router's and the AP's dashboard screens follow: Any comments on what I am doing wrong or hints for improvement are appreciated. Thank you. UPDATE: I tried one more thing, setting up the TP-LINK AP in Access Point mode. That way, I can make it use a different SSID. I enabled WDS/Bridge so that it expands the range of the ASUS router (see screenshot). That does not work either: if I connect to the network set up by the TP-LINK device (PELSTER-2), I can't reach the external network (the Internet). It seems the problem always comes back to this: the TP-LINK does not have access to the external network, whatever its mode of operation.

    Read the article

  • Force spin-down of an external hard drive on Linux (Raspberry Pi)

    - by user258346
    I'm currently setting up a home server using a Raspberry Pi with an external hard disk connected via USB. However, my hard drive never spins down when idle. I already tried the hints provided at raspberrypi.org ... without any success. 1.) sudo hdparm -S5 /dev/sda returns /dev/sda: setting standby to 5 (25 seconds) SG_IO: bad/missing sense data, sb[]: 70 00 04 00 00 00 00 0a 00 00 00 00 44 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2.) sudo hdparm -y /dev/sda returns /dev/sda: issuing standby command SG_IO: bad/missing sense data, sb[]: 70 00 04 00 00 00 00 0a 00 00 00 00 44 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...and 3.) sudo sdparm --flexible --command=stop /dev/sda returns /dev/sda: HDD 1234 ... without spin-down of the drive. I use the following hardware: an Inateck FDU3C-2 dual Ports USB 3.0 HDD docking station and a Western Digital WD10EZRX Green 1TB. Is it possible that the spin-down commands are being overridden, lost, or ignored somewhere?
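
    Many USB-SATA bridges drop or mistranslate ATA power-management commands, which matches the bad/missing sense data above. When that happens, a SCSI START STOP UNIT via sg3_utils, or a user-space idle daemon such as hd-idle, often works where hdparm/sdparm do not. A hedged shell sketch (flags as commonly documented; hd-idle may need to be built from source on Raspbian, so verify against your distribution):

        # Send a SCSI "stop unit" instead of an ATA standby command.
        sudo apt-get install sg3-utils
        sudo sg_start --stop /dev/sda

        # Or let hd-idle spin the disk down after 10 minutes of inactivity.
        sudo hd-idle -i 0 -a sda -i 600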

    Read the article

  • Simple, fast SQL queries for flat files.

    - by plinehan
    Does anyone know of any tools to provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB since the input data is typically thrown out almost immediately after the query is run. Consider the data file, "animals.txt": dog 15 cat 20 dog 10 cat 30 dog 5 cat 40 Suppose I want to extract the highest value for each unique animal. I would like to write something like: cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1" I can get nearly the same result using sort: cat animals.txt | sort -t " " -k1,1 -k2,2nr And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help but feel this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!
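
    For the max-per-group case, awk alone already covers it, and when real SQL syntax is wanted, sqlite3 can load a flat file into an in-memory database in one short pipeline. Both sketches assume space-separated input like animals.txt; the -cmd option and .import behaviour vary slightly across sqlite3 shell versions, so treat the second one as a sketch to verify.

        # Pure awk: track the maximum value seen per animal.
        awk '{ if (!($1 in max) || $2 > max[$1]) max[$1] = $2 }
             END { for (a in max) print a, max[a] }' animals.txt

        # SQLite in-memory: define the table, import the file, query with SQL.
        sqlite3 -cmd 'CREATE TABLE animals (name TEXT, value INTEGER);' \
                -cmd '.separator " "' \
                -cmd '.import animals.txt animals' \
                :memory: \
                'SELECT name, MAX(value) FROM animals GROUP BY name;'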

    Read the article

  • What is the best way to update an unattached entity on Entity Framework?

    - by Carlos Loth
    Hi, In my project I have some data classes to retrieve data from the database using the Entity Framework. We called these classes *EntityName*Manager. All of them have a method to retrieve entities from database and they behave most like this: static public EntityA SelectByName(String name) { using (var context = new ApplicationContext()) { var query = from a in context.EntityASet where a.Name == name select a; try { var entityA = query.First(); context.Detach(entityA); return entityA; } catch (InvalidOperationException ex) { throw new DataLayerException( String.Format("The entityA whose name is '{0}' was not found.", name), ex); } } } You can see that I detach the entity before return it to the method caller. So, my question is "what is the best way to create an update method on my *EntityA*Manager class?" I'd like to pass the modified entity as a parameter of the method. But I haven't figured out a way of doing it without going to the database and reload the entity and update its values inside a new context. Any ideas? Thanks in advance, Carlos Loth.
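
    With the ObjectContext API, the usual pattern is to attach the detached entity to a fresh context and then mark it as Modified, which avoids re-querying the database. A sketch assuming Entity Framework 4's ObjectStateManager.ChangeObjectState (EF1 lacks that member, so treat the call as an assumption to verify):

        static public void Update(EntityA entityA)
        {
            using (var context = new ApplicationContext())
            {
                // Attach the detached instance, then flag all of its scalar
                // properties as changed so SaveChanges issues an UPDATE.
                context.EntityASet.Attach(entityA);
                context.ObjectStateManager.ChangeObjectState(entityA, System.Data.EntityState.Modified);
                context.SaveChanges();
            }
        }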

    Read the article

  • Why can't Doctrine retrieve my model data?

    - by scottm
    So, I'm trying to use Doctrine to retrieve some data. I have some basic code like this: $conn = Doctrine_Manager::connection(CONNECTION_STRING); $site = Doctrine_Core::getTable('Site')->find('00024'); echo $site->SiteName; However, this keeps throwing a SQL error that 'column siteid does not exist'. When I look at the exception the SQL query is this (you can see the error is that the inner_tbl alias for siteid is set to s__siteid, so querying inner_tabl.siteid is what's broken): SELECT TOP 1 [inner_tbl].[siteid] AS [s__siteid] FROM (SELECT TOP 1 [s].[siteid] AS [s__siteid], [s].[name] AS [s__name], [s].[address] AS [s__address], [s].[city] AS [s__city], [s].[zip] AS [s__zip], [s].[state] AS [s__state], [s].[region] AS [s__region], [s].[callprocessor] AS [s__callprocessor], [s].[active] AS [s__active], [s].[dateadded] AS [s__dateadded] FROM [Sites] [s] WHERE ([s].[siteid] = '00024') ) AS [inner_tbl] Why is the query being generated this way? Could it be the way the Yaml schema is laid out? Site: connection: 0 tableName: Sites columns: siteid: type: string(5) fixed: true unsigned: false primary: true autoincrement: false name: type: string(300) fixed: false unsigned: false notnull: true primary: false autoincrement: false address: type: string(100) fixed: false unsigned: false notnull: false primary: false autoincrement: false city: type: string(100) fixed: false unsigned: false notnull: false primary: false autoincrement: false zip: type: string(5) fixed: false unsigned: false notnull: false primary: false autoincrement: false state: type: string(2) fixed: true unsigned: false notnull: true primary: false autoincrement: false region: type: integer(4) fixed: false unsigned: false notnull: true default: (5) primary: false autoincrement: false callprocessor: type: integer(4) fixed: false unsigned: false notnull: true primary: false autoincrement: false active: type: integer(1) fixed: false unsigned: false notnull: true primary: false autoincrement: false dateadded: type: timestamp(16) fixed: false unsigned: false notnull: true default: (getdate()) primary: false autoincrement: false

    Read the article

  • PHP mysqli Insert not working, but not giving any errors.

    - by asdasdas
    As the title says, I'm trying to do a simple insert, but nothing is actually inserted into the table. I try to print out errors, but nothing is reported. My users table has many more fields than these 4, but they should all default. $query = 'INSERT INTO users (username, password, level, name) VALUES (?, ?, ?, ?)'; if($stmt = $db -> prepare($query)) { $stmt -> bind_param('ssis', $username, $password, $newlevel, $realname); $stmt -> execute(); $stmt -> close(); echo 'Any Errors: '.$db->error.PHP_EOL; } There are no errors given, but when I look at the table in phpMyAdmin, no new row has been added. I know for sure that the types are correct (strings and integers). Is there something really wrong here, or does it have something to do with the fact that I'm ignoring other columns? I have about 8 columns in the user table.
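
    Two things the snippet above can hide: mysqli reports prepared-statement failures on the statement object ($stmt->error), not on the connection, and execute() returns false on failure; a disabled autocommit would also keep the row out of phpMyAdmin. A hedged sketch of the same insert with those checks added (column and variable names are the question's own):

        $query = 'INSERT INTO users (username, password, level, name) VALUES (?, ?, ?, ?)';
        if ($stmt = $db->prepare($query)) {
            $stmt->bind_param('ssis', $username, $password, $newlevel, $realname);
            if (!$stmt->execute()) {
                // Errors from execute() live on $stmt, not on $db.
                echo 'Execute failed: ' . $stmt->error . PHP_EOL;
            } else {
                echo 'Inserted rows: ' . $stmt->affected_rows . PHP_EOL;
            }
            $stmt->close();
            $db->commit();   // harmless if autocommit is already on
        } else {
            echo 'Prepare failed: ' . $db->error . PHP_EOL;
        }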

    Read the article

  • GAE python database object design for simple list of values

    - by Joey
    I'm really new to database object design so please forgive any weirdness in my question. Basically, I am using Google App Engine (Python) and constructing an object to track user info. One of these pieces of data is 40 Achievement scores. Do I make a list of ints in the User object for this? Or do I make a separate entity with my user id, the achievement index (0-39) and the score, and then do a query to grab these 40 items every time I want to get the user data in total? The latter approach seems more object-oriented to me, and certainly better if I extend it to have more than just scores for these 40 achievements. However, considering that I might not extend it, should I even consider just doing a simple list of 40 ints in my user data? I would then forgo doing a query, getting the sorted list of achievements, and reading the score from each one just to process a response, etc. Is the separate-entity approach such a common practice that the extra query cost and processing complexity are hand-waved as not even worth batting an eyelash at?
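
    For a fixed block of 40 scores that are read and written together, a ListProperty on the user entity keeps everything in a single datastore get; the separate-entity design earns its keep only if individual achievements need their own queries or extra fields later. A minimal sketch of both shapes using the old db API (names are illustrative, not from the question):

        from google.appengine.ext import db

        # Option 1: scores inline; list index doubles as the achievement index.
        class UserInfo(db.Model):
            achievement_scores = db.ListProperty(int, default=[0] * 40)

        # Option 2: one entity per achievement score.
        class AchievementScore(db.Model):
            index = db.IntegerProperty(required=True)   # 0-39
            score = db.IntegerProperty(default=0)

    Option 1 costs one get per user instead of a 40-entity query per request, which is usually the cheaper and simpler path on App Engine.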

    Read the article

  • Attributes of attributevalue element in SAML 2 Attribute Statement

    - by AJ
    I am building a web service that receives a SAML attribute query and responds with an attribute statement. I know I can return one or multiple values of a SAML attribute. I have some values that are dependent on the other attribute values. I need to show that relationship. Let us say, the query is for the Subject Dave and the return values are his company and job title. Dave can work at multiple companies with job title at each company. I have two options of sending this data back: Send this as a complextype by defining an attribute organization and return xml within that attribute. <saml:Attribute name="company"> <saml:AttributeValue> <company name="company1" jobtitle="CIO"/> <company name="company2" jobtitle="VP"/> </saml:AttributeValue> Try to send multiple values of attributes somehow sending a reference in attributevalue element. <saml:Attribute name="company"> <attributeValue>company1</attributeValue> <attributeValue>company2</attributeValue> </saml:Attribute> <saml:Attribute name="jobTitle> <attributeValue company="company1">CIO</attributeValue> <attributeValue company="company2">VP</attributeValue> </saml:Attribute> Which approach will you prefer? Why? I am biased towards second approach as it does not require client to know about any schema. It does require them to know about non-standard attribute company in the attribute value.

    Read the article

  • [Repost-ish] Impossibly slow queries, Tables indexed, How can I speed it up?

    - by colorfulgrayscale
    Hi guys, I posted a little earlier on here at http://stackoverflow.com/questions/2656837/query-results-taking-too-long-on-200k-database-speed-up-tips asking about slow executing SQL queries. I was told to index the columns; I did. and its still slow (slow as in, i never see the results, both mysql and sqlite freeze up on query). Help would be greatly appreciated. Here is the SQL SELECT equipment.`unitID` AS `equipment_unitID`, equipment.`fleetCode` AS `equipment_fleetCode`, equipment.type AS equipment_type, equipment.tiremap AS equipment_tiremap, tiremap.`TireID` AS `tiremap_TireID`, tiremap.`WorkMap` AS `tiremap_WorkMap`, tiremap.`Position` AS `tiremap_Position`, tiremap.`DepthMap` AS `tiremap_DepthMap`, tiremap.timestamp AS tiremap_timestamp, workreference.`aMap` AS `workreference_aMap`, workreference.`bMap` AS `workreference_bMap`, tirework.`RO` AS `tirework_RO`, tirework.location AS tirework_location, tirework.mileage AS tirework_mileage, tirework.`mechanicCode` AS `tirework_mechanicCode`, tirework.`partNumber` AS `tirework_partNumber`, tirework.`historyID` AS `tirework_historyID`, tirework.workmap AS tirework_workmap, tirework.timestamp AS tirework_timestamp FROM equipment, tiremap, workreference, tirework WHERE equipment.tiremap = tiremap.`TireID` AND tiremap.`WorkMap` = workreference.`aMap` AND workreference.`bMap` = tirework.workmap LIMIT 5 and here is the EXPLAIN for it id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE equipment ALL tiremap 14079 1 SIMPLE tiremap ref PRIMARY,WorkMap,TireID,WorkMap_2 PRIMARY 52 tire.equipment.tiremap 3 1 SIMPLE workreference ref aMap,bMap aMap 52 tire.tiremap.WorkMap 1 1 SIMPLE tirework eq_ref NewIndex1 NewIndex1 52 tire.workreference.bMap 1
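
    The EXPLAIN output shows equipment being scanned in full (type ALL, ~14k rows) without using an index, and the implicit comma joins make it easy to lose a join condition; rewriting with explicit JOINs and indexing each join column is the standard first step. A sketch of the same query in that form (table and column names from the post; the index statements are suggestions, not a guaranteed fix):

        SELECT e.unitID, e.fleetCode, e.type, e.tiremap,
               t.TireID, t.WorkMap, t.Position, t.DepthMap, t.timestamp,
               w.aMap, w.bMap,
               k.RO, k.location, k.mileage, k.mechanicCode, k.partNumber,
               k.historyID, k.workmap, k.timestamp
        FROM equipment e
        JOIN tiremap       t ON t.TireID  = e.tiremap
        JOIN workreference w ON w.aMap    = t.WorkMap
        JOIN tirework      k ON k.workmap = w.bMap
        LIMIT 5;

        -- Suggested indexes on the join columns (skip any that already exist):
        CREATE INDEX idx_equipment_tiremap ON equipment (tiremap);
        CREATE INDEX idx_tirework_workmap  ON tirework (workmap);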

    Read the article

  • using JMock to write unit test for a simple spring JDBC DAO

    - by Quincy
    I'm writing an unit test for spring jdbc dao. The method to test is: public long getALong() { return simpleJdbcTemplate.queryForObject("sql query here", new RowMapper<Long>() { public Long mapRow(ResultSet resultSet, int i) throws SQLException { return resultSet.getLong("a_long"); } }); } Here is what I have in the test: public void testGetALong() throws Exception { final Long result = 1000L; context.checking(new Expectations() {{ oneOf(simpleJdbcTemplate).queryForObject("sql_query", new RowMapper<Long>() { public Long mapRow(ResultSet resultSet, int i) throws SQLException { return resultSet.getLong("a_long"); } }); will(returnValue(result)); }}); Long seq = dao.getALong(); context.assertIsSatisfied(); assertEquals(seq, result); } Naturally, the test doesn't work (otherwise, I wouldn't be asking this question here). The problem is the rowmapper in the test is different from the rowmapper in the DAO. So the expectation is not met. I tried to put with around the sql query and with(any(RowMapper.class)) for the rowmapper. It wouldn't work either, complains about "not all parameters were given explicit matchers: either all parameters must be specified by matchers or all must be specified by values, you cannot mix matchers and values"
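
    JMock compares invocation arguments with equals(), and the anonymous RowMapper built inside the test can never equal the one the DAO constructs, so the expectation as written cannot match. Wrapping the SQL in with(equal(...)) and the mapper in with(any(RowMapper.class)) sidesteps that, and also satisfies the "all parameters must be matchers" rule quoted above. A sketch under that assumption (the literal SQL stands in for the DAO's real query):

        context.checking(new Expectations() {{
            // Match the SQL exactly, but accept whichever RowMapper the DAO builds.
            oneOf(simpleJdbcTemplate).queryForObject(
                    with(equal("sql query here")),
                    with(any(RowMapper.class)));
            will(returnValue(result));
        }});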

    Read the article

  • Banning by IP with php/mysql

    - by incrediman
    I want to be able to ban users by IP. My idea is to keep a list of IP's as rows in an BannedIPs table (the IP column would be an index). To check users' IP's against the table, I will keep a session variable called $_SESSION['IP'] for each session. If on any request, $_SESSION['IP'] doesn't match $_SERVER['REMOTE_ADDR'], I will update $_SESSION['IP'] and check the BannedIPs table to see if the IP is banned. (A flag will also be saved as a session variable specifying whether or not the user is banned) Here are the things I'm wondering: Does that sound like a good strategy with regards to speed and security (would someone be able to get around the IP ban somehow, other than changing IP's)? What's the best way to structure a mysql query that checks to see if a row exists? That is, what's the best way to query the db to see if a row with a certain IP exists (to check if it's banned)? Should I save the IP's as integers or strings? Note that... I estimate there will be between 1,000-10,000 banned IP's stored in the database. $_SERVER['REMOTE_ADDR'] is the IP from which the current request was sent.
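
    Storing the address as an unsigned integer via INET_ATON/INET_NTOA keeps the index compact, and an existence check only needs SELECT 1 ... LIMIT 1 rather than fetching a row. A sketch of the table and lookup under those assumptions (IPv4 only; IPv6 would need a different column type):

        CREATE TABLE BannedIPs (
            ip INT UNSIGNED NOT NULL,
            PRIMARY KEY (ip)
        );

        -- Ban an address.
        INSERT IGNORE INTO BannedIPs (ip) VALUES (INET_ATON('203.0.113.7'));

        -- Check the current visitor: one indexed lookup, a row comes back only if banned.
        SELECT 1 FROM BannedIPs WHERE ip = INET_ATON('203.0.113.7') LIMIT 1;

    In PHP the check reduces to binding $_SERVER['REMOTE_ADDR'] into that SELECT and testing whether any row was returned.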

    Read the article

  • Recover LVM2 volume group after one HDD failed

    - by Bernd
    I had two HDDs, each one containing an LVM partition which formed a volume group. Then I had two LVs, one for my / directory and one for my /home/ directory. Yesterday the HDD where I had my / dir failed. I'm trying to recover at least my /home/ dir. What I've done so far: boot a live system, extract the LVM2 metadata from the working HDD using dd, and copy the metadata to /etc/lvm/backup/vg0. Now I'm trying to do this: pvcreate --restore /etc/lvm/backup/vg0 --uuid "[uuid of my working hdd]" /dev/sdb2 But I always get: Couldn't find device with uuid '[uuid of broken hdd]'. Couldn't find device with uuid '[uuid of working hdd]'. Device /dev/sdb2 not found (or ignored by filtering). I confirmed that /dev/sdb2 exists, and I've commented out all filtering settings from /etc/lvm/lvm.conf, so I don't know what might be causing pvcreate not to find the device. So: What might be the problem? Is it even possible to restore this partition? (As I'm writing this I'm starting to think it's impossible D:) Edit: Okay, looks like I've got it figured out. I was using an Ubuntu 8.10 CD (yeah, I know it's not supported anymore) and it seems that was the problem. When I started from an Ubuntu 10.04 CD everything worked 'fine', and I could mount my LVM partitions partially without problems. (Will answer the question in 4 hours. But if anyone has still got some hints/tips, please share! :)
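
    For anyone landing here with the same symptoms: pvcreate --restore only rewrites the PV label, while the volume-group metadata itself is restored with vgcfgrestore, and a VG missing one PV can usually still be activated with --partial so the surviving LV can be copied off. A hedged sketch of that sequence (UUIDs, paths and the LV name are placeholders; run it against the working disk only):

        # 1. Recreate the PV label on the surviving partition with its old UUID.
        pvcreate --uuid "<uuid-of-working-hdd>" \
                 --restorefile /etc/lvm/backup/vg0 /dev/sdb2

        # 2. Restore the volume-group metadata from the backup.
        vgcfgrestore -f /etc/lvm/backup/vg0 vg0

        # 3. Activate what can be activated despite the missing PV, then mount read-only.
        vgchange -ay --partial vg0
        mount -o ro /dev/vg0/home /mnt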

    Read the article

  • traversal of multiple separate web services in a ring network

    - by qkrsppopcmpt
    I am facing a design problem; here are some basic requirements: Aggregator: 1. Separate services for blog, video, images and associations. 2. Each of the services should be completely separate, meaning they run on separate Tomcat instances. 3. Each aggregator must be able to query its local database and the other aggregators. 4. Traversal of services must be asynchronous, using a ring network. For example, we have a ring like ws1-ws2-ws3-ws4-ws1. Each node represents one type of aggregator. The traversal goes this way: the query goes from ws1 to ws2, and ws1 waits for the response from ws2 asynchronously; ws2 forwards to ws3 and likewise waits for ws3 asynchronously. If ws3 has the data, it replies to ws2 and then to ws1. However, if ws3 goes away, the traversal should go back to ws2, then to ws1, then on to ws4, then to ws3 again, and ws4 is then told that ws3 has failed. The required technologies are Axis2 and Tomcat 6. Does anybody have any clue about this? If anything is unclear, I can clarify the question further. Thanks very much.

    Read the article

  • DjangoUnicodeDecodeError while storing pickle'd data.

    - by Jack M.
    I've got a simple dict object I'm trying to store in the database after it has been run through pickle. It seems that Django doesn't like trying to encode this error. I've checked with MySQL, and the query isn't even getting there before it is throwing the error, so I don't believe that is the problem. The dict I'm storing looks like this: { 'ordered': [ { 'value': u'First\xd1ame Last\xd1ame', 'label': u'Full Name' }, { 'value': u'123-456-7890', 'label': u'Phone Number' }, { 'value': u'[email protected]', 'label': u'Email Address' } ], 'cleaned_data': { u'Phone Number': u'123-456-7890', u'Full Name': u'First\xd1ame Last\xd1ame', u'Email Address': u'[email protected]' }, 'post_data': <QueryDict: { u'Phone Number': [u'1234567890'], u'Full Name_1': [u'Last\xd1ame'], u'Full Name_0': [u'First\xd1ame'], u'Email Address': [u'[email protected]'] }>, 'user': <User: itis> } The error that gets thrown is: 'utf8' codec can't decode bytes in position 52-53: invalid data. Position 52-53 is the first instance of \xd1 (Ñ) in the pickled data. So far, I've dug around StackOverflow and found a few questions where the database encoding for the objects was wrong. This doesn't help me because there is no MySQL query yet. This is happening before the database. Google also didn't help much when searching for unicode errors on pickled data. It is probably worth mentioning that if I don't use the Ñ, this code works fine.
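
    Django's text fields expect UTF-8, while pickle output is arbitrary bytes, so any non-ASCII character (such as the \xd1 here) in the pickled payload can trigger exactly this decode error before a query is ever built. The usual workaround is to base64-encode the pickle before saving, or to use a binary column. A small sketch of that round trip, independent of the model involved:

        import base64
        import pickle

        def encode_for_text_field(obj):
            # Pickle to bytes, then wrap in ASCII-safe base64 for a TextField.
            return base64.b64encode(pickle.dumps(obj, protocol=2))

        def decode_from_text_field(data):
            return pickle.loads(base64.b64decode(data))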

    Read the article

  • SQL Server Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of names of people that has (currently) 35 million rows. I need to know what is the best method for quickly searching these names. The current system (not designed by me), simply has the first and last name columns indexed and uses "LIKE" queries with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, and so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions: Does full text index work well for proper names? If so, what is the best way to query proper names? (CONTAINS, FREETEXT, etc) Is there some other system (like Lucene.net) that would be better? Just for reference, I'm using Fluent NHibernate for data access, so methods that work will with that will be preferred. I'm using SQL Server 2008 currently. EDIT I want to add that I'm very interested in solutions that will deal with things like commonly misspelled names, eg 'smythe', 'smith', as well as first names, eg 'tomas', 'thomas'. Query Plan |--Parallelism(Gather Streams) |--Nested Loops(Inner Join, OUTER REFERENCES:([testdb].[dbo].[Test].[Id], [Expr1004]) OPTIMIZED WITH UNORDERED PREFETCH) |--Hash Match(Inner Join, HASH:([testdb].[dbo].[Test].[Id])=([testdb].[dbo].[Test].[Id])) | |--Bitmap(HASH:([testdb].[dbo].[Test].[Id]), DEFINE:([Bitmap1003])) | | |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([testdb].[dbo].[Test].[Id])) | | |--Index Seek(OBJECT:([testdb].[dbo].[Test].[IX_Test_LastName]), SEEK:([testdb].[dbo].[Test].[LastName] >= 'WHITDþ' AND [testdb].[dbo].[Test].[LastName] < 'WHITF'), WHERE:([testdb].[dbo].[Test].[LastName] like 'WHITE%') ORDERED FORWARD) | |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([testdb].[dbo].[Test].[Id])) | |--Index Seek(OBJECT:([testdb].[dbo].[Test].[IX_Test_FirstName]), SEEK:([testdb].[dbo].[Test].[FirstName] >= 'THOMARþ' AND [testdb].[dbo].[Test].[FirstName] < 'THOMAT'), WHERE:([testdb].[dbo].[Test].[FirstName] like 'THOMAS%' AND PROBE([Bitmap1003],[testdb].[dbo].[Test].[Id],N'[IN ROW]')) ORDERED FORWARD) |--Clustered Index Seek(OBJECT:([testdb].[dbo].[Test].[PK__TEST__3214EC073B95D2F1]), SEEK:([testdb].[dbo].[Test].[Id]=[testdb].[dbo].[Test].[Id]) LOOKUP ORDERED FORWARD) SQL for above: SELECT * FROM testdb.dbo.Test WHERE LastName LIKE 'WHITE%' AND FirstName LIKE 'THOMAS%' Based on advice from Mitch, I created an index like this: CREATE INDEX IX_Test_Name_DOB ON Test (LastName ASC, FirstName ASC, BirthDate ASC) INCLUDE (and here I list the other columns) My searches are now incredibly fast for my typical search (last, first, and birth date).
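
    For reference, a prefix search against a full-text index would look like the sketch below (it assumes a full-text catalog already covers FirstName and LastName); in practice the composite B-tree index described above handles the prefix case well, and full-text, SOUNDEX/DIFFERENCE, or an external engine such as Lucene.Net mainly earn their keep for the misspelling cases.

        -- Prefix terms in CONTAINS must be quoted and end with *.
        SELECT Id, FirstName, LastName
        FROM Test
        WHERE CONTAINS(LastName, '"WHITE*"')
          AND CONTAINS(FirstName, '"THOMAS*"');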

    Read the article

  • ASUS EAH5450 Graphics Card (ATI Radeon HD5450 - 1 GB DDR3) on Windows 2003? Anybody got it to work?

    - by JJarava
    Hi all! I've just bought an ASUS EAH5450 Graphics Card (ATI Radeon HD5450, 1 GB DDR3) for my main system, but I haven't been able to make it work under Windows 2003 (my OS in that system). When I plugged the card, I got a couple of "installing drivers" prompt for things such as "ATI High Definition Audio Device" that got themselves sorted out of the Internet, and then a "Standard VGA Graphics Adapter". The CD that came with the card installs something called "ATI Catalyst Install Manager" and .net 2.0, but no drivers. I've downloaded the latest (WinXP 32bits) drivers from ATI, and the experience is the same: I don't get any drivers installed. My Motherboard is an ASUS A8N-SLI with nVidia nForce 4 chipset (for an Athlon 64X2, somewhat old), but my previous card was an ATi Radeon X700, so it's been working with ATI cards before. On POST, during boot I see a "Display Card" Device (Vendor ID 1002-68F9-0300) and a "Multimedia Device" (1002-AA68-0403), and when viewing the properties of the "Standard VGA", they match the device ID. Any hints? I'd really hate having to get rid of the card, and I'm sure it's not that strange what I'm trying to do...

    Read the article

  • Can you spot what's wrong with my PHP or MySQL code?

    - by Jenna
    I am trying to create a category menu with sub categories. I have the following MySQL table: -- -- Table structure for table `categories` -- CREATE TABLE IF NOT EXISTS `categories` ( `ID` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(1000) NOT NULL, `slug` varchar(1000) NOT NULL, `parent` int(11) NOT NULL, `type` varchar(255) NOT NULL, PRIMARY KEY (`ID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=66 ; -- -- Dumping data for table `categories` -- INSERT INTO `categories` (`ID`, `name`, `slug`, `parent`, `type`) VALUES (63, 'Party', '/category/party/', 0, ''), (62, 'Kitchen', '/category/kitchen/', 61, 'sub'), (59, 'Animals', '/category/animals/', 0, ''), (64, 'Pets', '/category/pets/', 59, 'sub'), (61, 'Rooms', '/category/rooms/', 0, ''), (65, 'Zoo Creatures', '/category/zoo-creatures/', 59, 'sub'); And the following PHP: <?php include("connect.php"); echo "<ul>"; $query = mysql_query("SELECT * FROM categories"); while ($row = mysql_fetch_assoc($query)) { $catId = $row['id']; $catName = $row['name']; $catSlug = $row['slug']; $parent = $row['parent']; $type = $row['type']; if ($type == "sub") { $select = mysql_query("SELECT name FROM categories WHERE ID = $parent"); while ($row = mysql_fetch_assoc($select)) { $parentName = $row['name']; } echo "<li>$parentName >> $catName</li>"; } else if ($type == "") { echo "<li>$catName</li>"; } } echo "</ul>"; ?> Now Here's the Problem, It displays this: * Party * Rooms >> Kitchen * Animals * Animals >> Pets * Rooms * Animals >> Zoo Creatures I want it to display this: * Party * Rooms >> Kitchen * Animals >> Pets >> Zoo Creatures Is there something wrong with my loop? I just can't figure it out.
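
    One way to get the desired grouping without a query per row: read the categories once, bucket children under their parent IDs, then print each top-level category with its sub-categories. A hedged sketch using the same mysql_* API as the post (echo formatting kept minimal):

        $result = mysql_query("SELECT ID, name, parent FROM categories");
        $parents  = array();
        $children = array();
        while ($row = mysql_fetch_assoc($result)) {
            if ($row['parent'] == 0) {
                $parents[$row['ID']] = $row['name'];
            } else {
                $children[$row['parent']][] = $row['name'];
            }
        }
        echo "<ul>";
        foreach ($parents as $id => $name) {
            echo "<li>" . $name;
            if (!empty($children[$id])) {
                echo " &gt;&gt; " . implode(" &gt;&gt; ", $children[$id]);
            }
            echo "</li>";
        }
        echo "</ul>";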

    Read the article

  • GAE modeling relationship options

    - by Sway
    Hi there, I need to model the following situation and I can't seem to find a consistent example of how to do it "correctly" for Google App Engine. Suppose I've got a simple situation like the following: [Company] 1 ----- M [Store] A company has one to many stores. Each store has an address made up of an address line 1, city, state, country, postcode etc. OK. Let's say we need to create, say, an "Audit". An Audit is for a company and can be across one to many stores. So something like: [Audit] 1 ------ 1 [Company] 1 ------ M [Store] Now we need to query all of the "audits" based on the Store "addresses" in order to send the "Auditors" to the right locations. There seem to be numerous articles like this one: http://code.google.com/appengine/articles/modeling.html which give examples of creating a "ContactCompany" model class. However, they also say that you should use this kind of relationship only when you "really need to" and with "care" for performance. I've also read - frequently - that you should denormalize as much as possible, thereby moving all of the "query-able" data into the Audit class. So what would you suggest as the best way to solve this? I've seen that there is an Expando class, but I'm not sure if that is the "best" option for this. Any help or thoughts on this would be totally appreciated. Thanks in advance, Matt
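
    On the datastore, a common middle ground is a ReferenceProperty from Store to Company (which gives a free back-reference collection) plus denormalizing just the address fields the audit query filters on onto the Audit entity. A sketch of that shape with the old db API (property names are illustrative, not prescribed):

        from google.appengine.ext import db

        class Company(db.Model):
            name = db.StringProperty(required=True)

        class Store(db.Model):
            company  = db.ReferenceProperty(Company, collection_name='stores')
            address1 = db.StringProperty()
            city     = db.StringProperty()
            state    = db.StringProperty()
            postcode = db.StringProperty()

        class Audit(db.Model):
            company      = db.ReferenceProperty(Company, collection_name='audits')
            stores       = db.ListProperty(db.Key)        # keys of the audited stores
            store_states = db.StringListProperty()        # denormalized for querying

        # All audits touching a store in a given state, no join needed:
        audits = Audit.all().filter('store_states =', 'CA').fetch(20)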

    Read the article

  • Twitter Typeahead only shows 5 results

    - by user3685388
    I'm using the Twitter Typeahead version 0.10.2 autocomplete but I'm only receiving 5 results from my JSON result set. I can have 20 or more results but only 5 are shown. What am I doing wrong? var engine = new Bloodhound({ name: "blackboard-names", prefetch: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false } }, remote: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false }, }, datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'), queryTokenizer: Bloodhound.tokenizers.whitespace }); var promise = engine.initialize(); promise .done(function() { console.log("done"); }) .fail(function() { console.log("fail"); }); $("#Impersonate").typeahead({ minLength: 2, highlight: true}, { name: "blackboard-names", displayKey: 'value', source: engine.ttAdapter() }).bind("typeahead:selected", function(obj, datum, name) { console.log(obj, datum, name); alert(datum.id); }); Data: [ { "id": "1", "value": "Adams, Abigail", "tokens": [ "Adams", "A", "Ad", "Ada", "Abigail", "A", "Ab", "Abi" ] }, { "id": "2", "value": "Adams, Alan", "tokens": [ "Adams", "A", "Ad", "Ada", "Alan", "A", "Al", "Ala" ] }, { "id": "3", "value": "Adams, Alison", "tokens": [ "Adams", "A", "Ad", "Ada", "Alison", "A", "Al", "Ali" ] }, { "id": "4", "value": "Adams, Amber", "tokens": [ "Adams", "A", "Ad", "Ada", "Amber", "A", "Am", "Amb" ] }, { "id": "5", "value": "Adams, Amelia", "tokens": [ "Adams", "A", "Ad", "Ada", "Amelia", "A", "Am", "Ame" ] }, { "id": "6", "value": "Adams, Arik", "tokens": [ "Adams", "A", "Ad", "Ada", "Arik", "A", "Ar", "Ari" ] }, { "id": "7", "value": "Adams, Ashele", "tokens": [ "Adams", "A", "Ad", "Ada", "Ashele", "A", "As", "Ash" ] }, { "id": "8", "value": "Adams, Brady", "tokens": [ "Adams", "A", "Ad", "Ada", "Brady", "B", "Br", "Bra" ] }, { "id": "9", "value": "Adams, Brandon", "tokens": [ "Adams", "A", "Ad", "Ada", "Brandon", "B", "Br", "Bra" ] } ]
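
    Bloodhound caps the suggestions it returns at 5 by default; in typeahead.js 0.10.x the cap is the limit option on the Bloodhound constructor, so raising it there should surface the rest of the result set (option names shifted a little between 0.10.x releases, so treat this as a sketch to verify):

        var engine = new Bloodhound({
            name: "blackboard-names",
            limit: 20,   // default is 5 suggestions
            prefetch: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY" },
            remote:   { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY" },
            datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'),
            queryTokenizer: Bloodhound.tokenizers.whitespace
        });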

    Read the article

  • Is this a File Header / Magic Number?

    - by Hammer Bro.
    I've got 120,000 files (way more, actually; this is just an arbitrary subset) of an unknown type. Linux file does not identify them (not that they're necessarily Linux files), nor do any other methods I've tried. There are only two hints about them that I currently have. One is that I suspect some compression is employed -- I have metadata that claims the file sizes are always some amount larger than what I observe. The other is that in 100,000 of these files, the first 16 bytes are always: ff ee ee dd 00 00 00 00 01 00 00 00 00 00 00 00 That really looks like a file header/magic number to me, but I just can't place it. Does anyone know what kind of files this would indicate? Alternatively, can anyone convince me that these suspiciously common bytes certainly do not indicate a specific file type? UPDATE I don't know the exact reverse-engineering details, but most of the files in our case are zips after the first 29(? or so) bytes are ignored. So in practice the problem is solved (we know how to process the files) but in theory the question is still unanswered -- I don't know which application routinely prepends about 29 bytes to its zips. [I'm not sure if I should leave the question open or not at this point.]
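
    Given the update that the payload is a zip after a fixed-size prefix, the quickest per-file check is to locate the zip signature, strip the prefix, and let file/unzip judge the remainder. A small shell sketch (the 29-byte offset is the question's own estimate; adjust it to wherever the "PK" signature actually starts):

        # Look for the zip local-file signature ("PK\x03\x04") near the start.
        xxd -l 64 mystery.bin | grep PK

        # Strip the prefix and inspect what remains.
        dd if=mystery.bin of=payload.zip bs=1 skip=29
        file payload.zip
        unzip -l payload.zip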

    Read the article
