Search Results

Search found 22901 results on 917 pages for 'query bug'.


  • PHP mysqli Insert not working, but not giving any errors.

    - by asdasdas
    As the title says, I'm trying to do a simple insert, but nothing is actually inserted into the table. I try to print out errors, but nothing is reported. My users table has many more fields than these 4, but they should all default.

      $query = 'INSERT INTO users (username, password, level, name) VALUES (?, ?, ?, ?)';
      if ($stmt = $db->prepare($query)) {
          $stmt->bind_param('ssis', $username, $password, $newlevel, $realname);
          $stmt->execute();
          $stmt->close();
          echo 'Any Errors: ' . $db->error . PHP_EOL;
      }

    There are no errors given, but when I go to look at the table in phpMyAdmin there is no new row added. I know for sure that the types are correct (strings and integers). Is there something really wrong here, or does it have something to do with the fact that I'm ignoring other columns? I have about 8 columns in the users table.
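
    One thing worth checking (a sketch, not a confirmed diagnosis): execute-time failures land in $stmt->error rather than $db->error, and here $db->error is only read after the statement is closed; a disabled autocommit would also make the insert silently invisible. A minimal way to surface both, using the same $db connection as above:

      // Check the statement's own error state before close(), and confirm
      // the write actually happened via affected_rows.
      if ($stmt = $db->prepare($query)) {
          $stmt->bind_param('ssis', $username, $password, $newlevel, $realname);
          if (!$stmt->execute()) {
              echo 'Execute error: ' . $stmt->error . PHP_EOL;
          }
          echo 'Rows inserted: ' . $stmt->affected_rows . PHP_EOL;
          $stmt->close();
      } else {
          echo 'Prepare error: ' . $db->error . PHP_EOL;
      }
      $db->commit(); // no-op if autocommit is on; required if it was turned off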


  • Can I make GIMP 2.8.4 on OS X bring the file I just opened to the top window automatically?

    - by joe larson
    When I open an image file in GIMP 2.8.4 on OS X Mountain Lion 10.8.4 via the File > Open menu, it does not automatically make that image the top window, requiring that I then find that window via the Window menu. Very jarring UX! Is there a way to modify this behavior so that it behaves like most GUI applications and brings the document just opened to the fore? Is this a bug in GIMP 2.8? Interestingly, in my Ubuntu 12.04 with GIMP 2.6.12 this is not a problem. Since I haven't used GIMP much at all until this month, I don't know whether this is a new problem or not.


  • Why can't Doctrine retrieve my model data?

    - by scottm
    So, I'm trying to use Doctrine to retrieve some data. I have some basic code like this:

      $conn = Doctrine_Manager::connection(CONNECTION_STRING);
      $site = Doctrine_Core::getTable('Site')->find('00024');
      echo $site->SiteName;

    However, this keeps throwing a SQL error that 'column siteid does not exist'. When I look at the exception, the SQL query is this (you can see the error: the inner_tbl alias for siteid is set to s__siteid, so querying inner_tbl.siteid is what's broken):

      SELECT TOP 1 [inner_tbl].[siteid] AS [s__siteid]
      FROM (
          SELECT TOP 1 [s].[siteid] AS [s__siteid], [s].[name] AS [s__name],
              [s].[address] AS [s__address], [s].[city] AS [s__city],
              [s].[zip] AS [s__zip], [s].[state] AS [s__state],
              [s].[region] AS [s__region], [s].[callprocessor] AS [s__callprocessor],
              [s].[active] AS [s__active], [s].[dateadded] AS [s__dateadded]
          FROM [Sites] [s]
          WHERE ([s].[siteid] = '00024')
      ) AS [inner_tbl]

    Why is the query being generated this way? Could it be the way the YAML schema is laid out?

      Site:
        connection: 0
        tableName: Sites
        columns:
          siteid:
            type: string(5)
            fixed: true
            unsigned: false
            primary: true
            autoincrement: false
          name:
            type: string(300)
            fixed: false
            unsigned: false
            notnull: true
            primary: false
            autoincrement: false
          address:
            type: string(100)
            fixed: false
            unsigned: false
            notnull: false
            primary: false
            autoincrement: false
          city:
            type: string(100)
            fixed: false
            unsigned: false
            notnull: false
            primary: false
            autoincrement: false
          zip:
            type: string(5)
            fixed: false
            unsigned: false
            notnull: false
            primary: false
            autoincrement: false
          state:
            type: string(2)
            fixed: true
            unsigned: false
            notnull: true
            primary: false
            autoincrement: false
          region:
            type: integer(4)
            fixed: false
            unsigned: false
            notnull: true
            default: (5)
            primary: false
            autoincrement: false
          callprocessor:
            type: integer(4)
            fixed: false
            unsigned: false
            notnull: true
            primary: false
            autoincrement: false
          active:
            type: integer(1)
            fixed: false
            unsigned: false
            notnull: true
            primary: false
            autoincrement: false
          dateadded:
            type: timestamp(16)
            fixed: false
            unsigned: false
            notnull: true
            default: (getdate())
            primary: false
            autoincrement: false


  • GAE python database object design for simple list of values

    - by Joey
    I'm really new to database object design, so please forgive any weirdness in my question. Basically, I am using Google App Engine (Python) and constructing an object to track user info. One of these pieces of data is 40 achievement scores. Do I make a list of ints in the User object for this? Or do I make a separate entity with my user id, the achievement index (0-39), and the score, and then do a query to grab these 40 items every time I want the user data in total? The latter approach seems more object oriented to me, and certainly better if I extend it to hold more than just scores for these 40 achievements. However, considering that I might not extend it, should I even consider just doing a simple list of 40 ints in my user data? I would then forgo doing a query, getting the sorted list of achievements, and reading the score from each one just to process a response. Is the simple-list approach such common practice that it's hand-waved as not even worth batting an eyelash at, or might it turn out more costly or complex to process?
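
    For reference, a sketch of the two layouts being weighed, using the old db API (all names here are hypothetical, not from the question):

      from google.appengine.ext import db

      class UserInfo(db.Model):
          # Option 1: 40 scores inline, fetched with the user in one get().
          achievement_scores = db.ListProperty(int)

      class AchievementScore(db.Model):
          # Option 2: one entity per achievement, joined back by a query.
          user = db.ReferenceProperty(UserInfo)
          index = db.IntegerProperty()   # 0-39
          score = db.IntegerProperty()

      # Option 2 read path: an extra query on every user-profile load.
      scores = AchievementScore.all().filter('user =', user).order('index').fetch(40)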


  • Attributes of attributevalue element in SAML 2 Attribute Statement

    - by AJ
    I am building a web service that receives a SAML attribute query and responds with an attribute statement. I know I can return one or multiple values of a SAML attribute. I have some values that are dependent on other attribute values, and I need to show that relationship. Let us say the query is for the subject Dave, and the return values are his company and job title. Dave can work at multiple companies, with a job title at each company. I have two options for sending this data back:

    1. Send this as a complex type by defining an attribute organization and returning XML within that attribute:

      <saml:Attribute name="company">
        <saml:AttributeValue>
          <company name="company1" jobtitle="CIO"/>
          <company name="company2" jobtitle="VP"/>
        </saml:AttributeValue>
      </saml:Attribute>

    2. Try to send multiple values of attributes, somehow sending a reference in the attributevalue element:

      <saml:Attribute name="company">
        <attributeValue>company1</attributeValue>
        <attributeValue>company2</attributeValue>
      </saml:Attribute>
      <saml:Attribute name="jobTitle">
        <attributeValue company="company1">CIO</attributeValue>
        <attributeValue company="company2">VP</attributeValue>
      </saml:Attribute>

    Which approach would you prefer? Why? I am biased towards the second approach, as it does not require the client to know about any schema. It does require them to know about the non-standard attribute company in the attribute value.


  • PowerShell overruling Perl binmode?

    - by hippietrail
    I have a Perl script which creates a binary file while scanning a very large text file. It outputs to STDOUT, which I redirect on the command line to a file. To optimize it, I'm making changes and then seeing how long it takes to run. On Linux I use the "time" command for this. On Windows, the best way to time a program seemed to be PowerShell's "measure-command". This seemed to work fine, but I noticed the generated files were larger. On examination I found that the files generated from within PowerShell begin with a BOM and contain CRLF pairs! My Perl script has a "binmode STDOUT" directive and does work correctly in a normal dosbox. Is this a bug or misfeature in PowerShell or measure-command? Has it affected others creating binary files by means other than Perl? Googling hasn't turned anything up so far. I'm using Perl 5.12, PowerShell v1.0 and Windows XP.
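
    The symptom matches PowerShell's > redirection, which passes output through .NET strings (hence the BOM and the CRLF pairs) rather than copying raw bytes. A sketch of one workaround that keeps the timing in PowerShell but hands the redirection to cmd.exe (the script and file names are hypothetical):

      # Let cmd.exe do the byte-for-byte redirection; PowerShell only times it.
      Measure-Command { cmd /c "perl scan.pl big.txt > out.bin" }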


  • Agent admitted failure to sign using the key.

    - by Delirium tremens
    The .ssh dir is chmodded 700, id_rsa.pub 600, id_rsa 400. I ran ssh-keygen -t rsa, imported the key to Launchpad, and ran bzr branch lp:unity, but got this error message:

      Agent admitted failure to sign using the key.
      Permission denied (publickey).
      bzr: ERROR: Connection closed: Unexpected end of message. Please check
      connectivity and permissions, and report a bug if problems persist.

    auth.log:

      Nov 28 20:23:13 ubuntu sudo: deltrem : TTY=pts/0 ; PWD=/home/deltrem/Documentos/repositories ; USER=root ; COMMAND=/usr/bin/bzr branch lp:unity
      Nov 28 20:39:01 ubuntu CRON[2959]: pam_unix(cron:session): session opened for user root by (uid=0)
      Nov 28 20:39:01 ubuntu CRON[2959]: pam_unix(cron:session): session closed for user root
      Nov 28 20:41:04 ubuntu gnome-screensaver-dialog: gkr-pam: unlocked login keyring
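
    This message usually means the running SSH agent does not actually hold the key. A commonly suggested remedy (a sketch, not a guaranteed fix for this particular setup) is to load the key explicitly and retry:

      # Hand the key to the agent, confirm it is loaded, then retry the branch.
      ssh-add ~/.ssh/id_rsa
      ssh-add -l
      bzr branch lp:unity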


  • [Repost-ish] Impossibly slow queries, tables indexed. How can I speed them up?

    - by colorfulgrayscale
    Hi guys, I posted a little earlier here at http://stackoverflow.com/questions/2656837/query-results-taking-too-long-on-200k-database-speed-up-tips asking about slow-executing SQL queries. I was told to index the columns; I did, and it's still slow (slow as in, I never see the results; both MySQL and SQLite freeze up on the query). Help would be greatly appreciated. Here is the SQL:

      SELECT
          equipment.`unitID` AS `equipment_unitID`,
          equipment.`fleetCode` AS `equipment_fleetCode`,
          equipment.type AS equipment_type,
          equipment.tiremap AS equipment_tiremap,
          tiremap.`TireID` AS `tiremap_TireID`,
          tiremap.`WorkMap` AS `tiremap_WorkMap`,
          tiremap.`Position` AS `tiremap_Position`,
          tiremap.`DepthMap` AS `tiremap_DepthMap`,
          tiremap.timestamp AS tiremap_timestamp,
          workreference.`aMap` AS `workreference_aMap`,
          workreference.`bMap` AS `workreference_bMap`,
          tirework.`RO` AS `tirework_RO`,
          tirework.location AS tirework_location,
          tirework.mileage AS tirework_mileage,
          tirework.`mechanicCode` AS `tirework_mechanicCode`,
          tirework.`partNumber` AS `tirework_partNumber`,
          tirework.`historyID` AS `tirework_historyID`,
          tirework.workmap AS tirework_workmap,
          tirework.timestamp AS tirework_timestamp
      FROM equipment, tiremap, workreference, tirework
      WHERE equipment.tiremap = tiremap.`TireID`
        AND tiremap.`WorkMap` = workreference.`aMap`
        AND workreference.`bMap` = tirework.workmap
      LIMIT 5

    and here is the EXPLAIN for it:

      id  select_type  table          type    possible_keys                     key        key_len  ref                      rows   Extra
      1   SIMPLE       equipment      ALL     tiremap                                                                        14079
      1   SIMPLE       tiremap        ref     PRIMARY,WorkMap,TireID,WorkMap_2  PRIMARY    52       tire.equipment.tiremap   3
      1   SIMPLE       workreference  ref     aMap,bMap                         aMap       52       tire.tiremap.WorkMap     1
      1   SIMPLE       tirework       eq_ref  NewIndex1                         NewIndex1  52       tire.workreference.bMap  1
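
    For what it's worth, here is the same query in explicit JOIN form (no behavior change, but each join and its EXPLAIN row become easier to reason about one at a time; the column list is abbreviated here):

      SELECT e.`unitID`, e.`fleetCode`, t.`TireID`, w.`aMap`, w.`bMap`, tw.`RO`
      FROM equipment AS e
      JOIN tiremap AS t ON e.tiremap = t.`TireID`
      JOIN workreference AS w ON t.`WorkMap` = w.`aMap`
      JOIN tirework AS tw ON w.`bMap` = tw.workmap
      LIMIT 5;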


  • How do I apply a minor MySQL upgrade on Windows?

    - by TruMan1
    I currently have MySQL 5.1.35 installed on a Windows 2008 server via the MSI installer. I need to upgrade to the latest 5.1.44 to fix a bug, but the docs were not clear on how to do this. I ran the MSI installer, but it did not give me any upgrade option, so I quit it. I am wary because it's a production machine with many PHP websites running on it. Also, my data directory is not the default one; it's kept on another partition. How can I upgrade it? Thanks for any help.
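
    A cautious outline of a typical in-place minor upgrade for an MSI install (an assumption about the usual layout, not official upgrade steps; the service name is hypothetical, and a backup comes first):

      rem Back up, stop the service, install 5.1.44 pointed at the same
      rem my.ini / datadir, then restart and check the tables.
      mysqldump -u root -p --all-databases > backup.sql
      net stop MySQL
      rem ... run the 5.1.44 MSI here, keeping the existing configuration ...
      net start MySQL
      mysql_upgrade -u root -p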


  • Using JMock to write a unit test for a simple Spring JDBC DAO

    - by Quincy
    I'm writing a unit test for a Spring JDBC DAO. The method to test is:

      public long getALong() {
          return simpleJdbcTemplate.queryForObject("sql query here", new RowMapper<Long>() {
              public Long mapRow(ResultSet resultSet, int i) throws SQLException {
                  return resultSet.getLong("a_long");
              }
          });
      }

    Here is what I have in the test:

      public void testGetALong() throws Exception {
          final Long result = 1000L;
          context.checking(new Expectations() {{
              oneOf(simpleJdbcTemplate).queryForObject("sql_query", new RowMapper<Long>() {
                  public Long mapRow(ResultSet resultSet, int i) throws SQLException {
                      return resultSet.getLong("a_long");
                  }
              });
              will(returnValue(result));
          }});

          Long seq = dao.getALong();
          context.assertIsSatisfied();
          assertEquals(seq, result);
      }

    Naturally, the test doesn't work (otherwise, I wouldn't be asking this question here). The problem is that the RowMapper in the test is a different instance from the RowMapper in the DAO, so the expectation is not met. I tried putting with() around the SQL query and with(any(RowMapper.class)) for the RowMapper. That wouldn't work either; it complains: "not all parameters were given explicit matchers: either all parameters must be specified by matchers or all must be specified by values, you cannot mix matchers and values".
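
    For reference, a sketch of the all-matchers form that error message asks for: once any parameter uses a matcher, every parameter needs a with(...) wrapper.

      // Match the SQL by value and accept whatever RowMapper the DAO builds.
      context.checking(new Expectations() {{
          oneOf(simpleJdbcTemplate).queryForObject(
              with(equal("sql query here")),   // the exact SQL the DAO uses
              with(any(RowMapper.class)));     // any RowMapper instance
          will(returnValue(result));
      }});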


  • Traversal of multiple separate web services in a ring network

    - by qkrsppopcmpt
    I am facing a design problem. Here are some basic requirements for the aggregator:

    1. Separate services for blog, video, images and associations.
    2. Each of the services should be completely separate; that means they run on separate Tomcats.
    3. Each aggregator must be able to query its local database and other aggregators.
    4. Traversal of services must be asynchronous, using a ring network.

    For example, we have a ring like ws1-ws2-ws3-ws4-ws1. Each node represents one type of aggregator. The traversal goes this way: the query goes from ws1 to ws2, and ws1 waits for the response from ws2 asynchronously; then from ws2 to ws3, and ws2 likewise waits for ws3 asynchronously. If ws3 has the data, it replies to ws2 and then to ws1. However, if ws3 goes away, the traversal should fall back to ws2, then to ws1, then go to ws4, then to ws3 again; ws4 is then told that ws3 failed. The required technology is Axis2 and Tomcat 6. Does anybody have any clue about this? If anything is unclear, I can clarify the question further. Thanks very much.


  • Banning by IP with PHP/MySQL

    - by incrediman
    I want to be able to ban users by IP. My idea is to keep a list of IPs as rows in a BannedIPs table (the IP column would be an index). To check users' IPs against the table, I will keep a session variable called $_SESSION['IP'] for each session. If on any request $_SESSION['IP'] doesn't match $_SERVER['REMOTE_ADDR'], I will update $_SESSION['IP'] and check the BannedIPs table to see if the IP is banned. (A flag will also be saved as a session variable specifying whether or not the user is banned.) Here are the things I'm wondering:

    1. Does that sound like a good strategy with regards to speed and security (would someone be able to get around the IP ban somehow, other than changing IPs)?
    2. What's the best way to structure a MySQL query that checks whether a row exists? That is, what's the best way to query the db to see if a row with a certain IP exists (to check if it's banned)?
    3. Should I save the IPs as integers or strings?

    Note that I estimate there will be between 1,000-10,000 banned IPs stored in the database, and that $_SERVER['REMOTE_ADDR'] is the IP from which the current request was sent.
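
    One way to cover the last two points (a sketch assuming IPv4, a mysqli connection, and an INT UNSIGNED ip column; table and column names are illustrative): store each address as an unsigned integer and test membership with an indexed point lookup.

      // Existence check: any returned row means the IP is banned.
      $ip = $_SERVER['REMOTE_ADDR'];
      $stmt = $db->prepare('SELECT 1 FROM BannedIPs WHERE ip = INET_ATON(?) LIMIT 1');
      $stmt->bind_param('s', $ip);
      $stmt->execute();
      $stmt->store_result();
      $banned = ($stmt->num_rows > 0);
      $stmt->close();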


  • Force Colloquy not to use built-in Growl notifications

    - by thepurplepixel
    Whenever Colloquy needs to pop up a notification (for example, when you are PM'd), it uses its built-in Growl notifications, which really annoy me because they stay on the screen until they are clicked (at least NOTICEs do, anyway). I'd like to make Colloquy use the Growl that I have installed on my Mac, not its built-in Growl notifications. That way, I could change its preferences from the Growl .prefpane and it would match the look of all my other notifications. I seem to remember this being possible (maybe from a bug report or something), but I can't remember how. Thanks!


  • Double root folder vs single root folder

    - by Tomas
    On my Linux box, in bash, I have access to a "double root" folder denoted by two forward slashes:

      tomas:~ $ cd /
      tomas:/ $ ls
      bin/ cdrom@ ...
      tomas:/ $ cd //
      tomas:// $ ls
      bin/ cdrom@ ...

    The content of the folder and its subfolders is identical to the "normal" single-slash root. The double slash does not go away when I access its subfolders. The anomaly does not repeat itself with three or more slashes; those are simple synonyms for the root:

      tomas:// $ cd home/tomas
      tomas://home/tomas $ cd ///
      tomas:/ $ cd ////
      tomas:/ $

    What kind of place is it? Is it a bug? Can anyone explain the anomaly?


  • DjangoUnicodeDecodeError while storing pickle'd data.

    - by Jack M.
    I've got a simple dict object I'm trying to store in the database after it has been run through pickle. It seems that Django doesn't like trying to encode this data. I've checked with MySQL, and the query isn't even getting there before the error is thrown, so I don't believe that is the problem. The dict I'm storing looks like this:

      {
          'ordered': [
              {'value': u'First\xd1ame Last\xd1ame', 'label': u'Full Name'},
              {'value': u'123-456-7890', 'label': u'Phone Number'},
              {'value': u'[email protected]', 'label': u'Email Address'}
          ],
          'cleaned_data': {
              u'Phone Number': u'123-456-7890',
              u'Full Name': u'First\xd1ame Last\xd1ame',
              u'Email Address': u'[email protected]'
          },
          'post_data': <QueryDict: {
              u'Phone Number': [u'1234567890'],
              u'Full Name_1': [u'Last\xd1ame'],
              u'Full Name_0': [u'First\xd1ame'],
              u'Email Address': [u'[email protected]']
          }>,
          'user': <User: itis>
      }

    The error that gets thrown is:

      'utf8' codec can't decode bytes in position 52-53: invalid data.

    Position 52-53 is the first instance of \xd1 (Ñ) in the pickled data. So far, I've dug around StackOverflow and found a few questions where the database encoding for the objects was wrong. This doesn't help me, because there is no MySQL query yet; this is happening before the database. Google also didn't help much when searching for unicode errors on pickled data. It is probably worth mentioning that if I don't use the Ñ, this code works fine.
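
    A common way around this class of error (a sketch, not a confirmed fix for this exact model): pickle output is a byte string, not UTF-8 text, so encoding it before it reaches any text-handling layer sidesteps the decode step entirely.

      # base64 makes the pickled bytes safe for any text column or codec.
      # 'data' stands in for the dict above.
      import base64
      import pickle

      blob = base64.b64encode(pickle.dumps(data))        # store this string
      restored = pickle.loads(base64.b64decode(blob))    # decode on the way out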


  • How to restore a previous Firefox session?

    - by jae
    FF 3.5.4 on Ubuntu 9.10. I have (of course) session saving (the built-in one) enabled. I closed the main FF window, but the "Downloads" window was still open. And on reopen, it had forgotten about the previous tabs. This is annoying as hell, and yes, I should report (or check for) a bug. If I could stomach Bugzilla, that is. :P I have the sessionstore.js file with this older session (scanning it with less showed many of the sites I know had been open). How do I get FF to use this session file? I did try to remove sessionstore.* and copy the sessionstore.js (or .bak) to the profile folder, but that doesn't have any effect. EDIT: rewritten to make it as obvious as it can be. I wasn't expecting people to jump to "this guy's a stupid git" quite so easily.


  • SQL Server Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of names of people that (currently) has 35 million rows. I need to know the best method for quickly searching these names. The current system (not designed by me) simply has the first and last name columns indexed and uses "LIKE" queries, with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, and so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions:

    1. Does a full text index work well for proper names?
    2. If so, what is the best way to query proper names? (CONTAINS, FREETEXT, etc.)
    3. Is there some other system (like Lucene.net) that would be better?

    Just for reference, I'm using Fluent NHibernate for data access, so methods that work well with that will be preferred. I'm using SQL Server 2008 currently.

    EDIT: I want to add that I'm very interested in solutions that will deal with things like commonly misspelled names, e.g. 'smythe', 'smith', as well as first names, e.g. 'tomas', 'thomas'.

    Query plan:

      |--Parallelism(Gather Streams)
           |--Nested Loops(Inner Join, OUTER REFERENCES:([testdb].[dbo].[Test].[Id], [Expr1004]) OPTIMIZED WITH UNORDERED PREFETCH)
                |--Hash Match(Inner Join, HASH:([testdb].[dbo].[Test].[Id])=([testdb].[dbo].[Test].[Id]))
                |    |--Bitmap(HASH:([testdb].[dbo].[Test].[Id]), DEFINE:([Bitmap1003]))
                |    |    |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([testdb].[dbo].[Test].[Id]))
                |    |         |--Index Seek(OBJECT:([testdb].[dbo].[Test].[IX_Test_LastName]), SEEK:([testdb].[dbo].[Test].[LastName] >= 'WHITDþ' AND [testdb].[dbo].[Test].[LastName] < 'WHITF'), WHERE:([testdb].[dbo].[Test].[LastName] like 'WHITE%') ORDERED FORWARD)
                |    |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([testdb].[dbo].[Test].[Id]))
                |         |--Index Seek(OBJECT:([testdb].[dbo].[Test].[IX_Test_FirstName]), SEEK:([testdb].[dbo].[Test].[FirstName] >= 'THOMARþ' AND [testdb].[dbo].[Test].[FirstName] < 'THOMAT'), WHERE:([testdb].[dbo].[Test].[FirstName] like 'THOMAS%' AND PROBE([Bitmap1003],[testdb].[dbo].[Test].[Id],N'[IN ROW]')) ORDERED FORWARD)
                |--Clustered Index Seek(OBJECT:([testdb].[dbo].[Test].[PK__TEST__3214EC073B95D2F1]), SEEK:([testdb].[dbo].[Test].[Id]=[testdb].[dbo].[Test].[Id]) LOOKUP ORDERED FORWARD)

    SQL for the above:

      SELECT *
      FROM testdb.dbo.Test
      WHERE LastName LIKE 'WHITE%'
        AND FirstName LIKE 'THOMAS%'

    Based on advice from Mitch, I created an index like this:

      CREATE INDEX IX_Test_Name_DOB
      ON Test (LastName ASC, FirstName ASC, BirthDate ASC)
      INCLUDE (and here I list the other columns)

    My searches are now incredibly fast for my typical search (last, first, and birth date).
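
    For the misspelling cases mentioned in the edit, a sketch of what the built-in phonetic functions can do (a starting point only, not a full fuzzy-matching solution):

      -- SOUNDEX collapses simple spelling variants to one code
      -- ('smythe' and 'smith' both map to S530); DIFFERENCE scores 0-4.
      SELECT LastName, FirstName
      FROM Test
      WHERE SOUNDEX(LastName) = SOUNDEX('smythe')
        AND DIFFERENCE(FirstName, 'tomas') >= 3;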


  • Can you notice what's wrong with my PHP or MySQL code?

    - by Jenna
    I am trying to create a category menu with subcategories. I have the following MySQL table:

      --
      -- Table structure for table `categories`
      --
      CREATE TABLE IF NOT EXISTS `categories` (
        `ID` int(11) NOT NULL AUTO_INCREMENT,
        `name` varchar(1000) NOT NULL,
        `slug` varchar(1000) NOT NULL,
        `parent` int(11) NOT NULL,
        `type` varchar(255) NOT NULL,
        PRIMARY KEY (`ID`)
      ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=66 ;

      --
      -- Dumping data for table `categories`
      --
      INSERT INTO `categories` (`ID`, `name`, `slug`, `parent`, `type`) VALUES
      (63, 'Party', '/category/party/', 0, ''),
      (62, 'Kitchen', '/category/kitchen/', 61, 'sub'),
      (59, 'Animals', '/category/animals/', 0, ''),
      (64, 'Pets', '/category/pets/', 59, 'sub'),
      (61, 'Rooms', '/category/rooms/', 0, ''),
      (65, 'Zoo Creatures', '/category/zoo-creatures/', 59, 'sub');

    And the following PHP:

      <?php
      include("connect.php");
      echo "<ul>";
      $query = mysql_query("SELECT * FROM categories");
      while ($row = mysql_fetch_assoc($query)) {
          $catId = $row['id'];
          $catName = $row['name'];
          $catSlug = $row['slug'];
          $parent = $row['parent'];
          $type = $row['type'];
          if ($type == "sub") {
              $select = mysql_query("SELECT name FROM categories WHERE ID = $parent");
              while ($row = mysql_fetch_assoc($select)) {
                  $parentName = $row['name'];
              }
              echo "<li>$parentName >> $catName</li>";
          } else if ($type == "") {
              echo "<li>$catName</li>";
          }
      }
      echo "</ul>";
      ?>

    Now here's the problem. It displays this:

      * Party
      * Rooms >> Kitchen
      * Animals
      * Animals >> Pets
      * Rooms
      * Animals >> Zoo Creatures

    I want it to display this:

      * Party
      * Rooms >> Kitchen
      * Animals >> Pets >> Zoo Creatures

    Is there something wrong with my loop? I just can't figure it out.
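
    A sketch of one way to get the grouped output, in the same mysql_* style as the question (this restructures the loop rather than patching it): walk the top-level rows first and append each one's children inline.

      // Parents first; each parent's children are appended to the same line.
      echo "<ul>";
      $parents = mysql_query("SELECT * FROM categories WHERE parent = 0");
      while ($cat = mysql_fetch_assoc($parents)) {
          echo "<li>{$cat['name']}";
          $kids = mysql_query("SELECT name FROM categories WHERE parent = {$cat['ID']}");
          while ($kid = mysql_fetch_assoc($kids)) {
              echo " >> {$kid['name']}";
          }
          echo "</li>";
      }
      echo "</ul>";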


  • GAE modeling relationship options

    - by Sway
    Hi there, I need to model the following situation and I can't seem to find a consistent example of how to do it "correctly" for Google App Engine. Suppose I've got a simple situation like the following:

      [Company] 1 ----- M [Store]

    A company has one to many stores. Each store has an address made up of an address line 1, city, state, country, postcode, etc. OK. Let's say we need to create, say, an "Audit". An Audit is for a company and can cover one to many stores. So something like:

      [Audit] 1 ------ 1 [Company] 1 ------ M [Store]

    Now we need to query all of the "audits" based on the store "addresses" in order to send the "auditors" to the right locations. There seem to be numerous articles like this one: http://code.google.com/appengine/articles/modeling.html which give examples of creating a "ContactCompany" model class. However, they also say that you should use this kind of relationship only when you "really need to", and with "care", for performance. I've also read - frequently - that you should denormalize as much as possible, thereby moving all of the "query-able" data into the Audit class. So what would you suggest as the best way to solve this? I've seen that there is an Expando class, but I'm not sure whether that is the "best" option for this. Any help or thoughts on this would be totally appreciated. Thanks in advance, Matt
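
    A sketch of the denormalized direction described above, using the db API (all names are hypothetical): the queryable address fields are copied onto the Audit, so one query finds audits by store location, at the cost of rewriting the copies whenever a store's address changes.

      from google.appengine.ext import db

      class Audit(db.Model):
          company = db.ReferenceProperty()        # the audited Company
          store_keys = db.ListProperty(db.Key)    # the 1..M stores covered
          store_cities = db.StringListProperty()  # denormalized for querying
          store_states = db.StringListProperty()

      # e.g. all audits touching any store in a given city:
      audits = Audit.all().filter('store_cities =', 'Springfield').fetch(20)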


  • Twitter Typeahead only shows 5 results

    - by user3685388
    I'm using Twitter Typeahead version 0.10.2 for autocomplete, but I'm only receiving 5 results from my JSON result set. I can have 20 or more results, but only 5 are shown. What am I doing wrong?

      var engine = new Bloodhound({
          name: "blackboard-names",
          prefetch: {
              url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY",
              ajax: { contentType: "json", cache: false }
          },
          remote: {
              url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY",
              ajax: { contentType: "json", cache: false }
          },
          datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'),
          queryTokenizer: Bloodhound.tokenizers.whitespace
      });

      var promise = engine.initialize();
      promise
          .done(function() { console.log("done"); })
          .fail(function() { console.log("fail"); });

      $("#Impersonate").typeahead({ minLength: 2, highlight: true }, {
          name: "blackboard-names",
          displayKey: 'value',
          source: engine.ttAdapter()
      }).bind("typeahead:selected", function(obj, datum, name) {
          console.log(obj, datum, name);
          alert(datum.id);
      });

    Data:

      [
          { "id": "1", "value": "Adams, Abigail", "tokens": ["Adams", "A", "Ad", "Ada", "Abigail", "A", "Ab", "Abi"] },
          { "id": "2", "value": "Adams, Alan", "tokens": ["Adams", "A", "Ad", "Ada", "Alan", "A", "Al", "Ala"] },
          { "id": "3", "value": "Adams, Alison", "tokens": ["Adams", "A", "Ad", "Ada", "Alison", "A", "Al", "Ali"] },
          { "id": "4", "value": "Adams, Amber", "tokens": ["Adams", "A", "Ad", "Ada", "Amber", "A", "Am", "Amb"] },
          { "id": "5", "value": "Adams, Amelia", "tokens": ["Adams", "A", "Ad", "Ada", "Amelia", "A", "Am", "Ame"] },
          { "id": "6", "value": "Adams, Arik", "tokens": ["Adams", "A", "Ad", "Ada", "Arik", "A", "Ar", "Ari"] },
          { "id": "7", "value": "Adams, Ashele", "tokens": ["Adams", "A", "Ad", "Ada", "Ashele", "A", "As", "Ash"] },
          { "id": "8", "value": "Adams, Brady", "tokens": ["Adams", "A", "Ad", "Ada", "Brady", "B", "Br", "Bra"] },
          { "id": "9", "value": "Adams, Brandon", "tokens": ["Adams", "A", "Ad", "Ada", "Brandon", "B", "Br", "Bra"] }
      ]
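
    If this is stock behavior rather than a bug: typeahead.js datasets cap suggestions at 5 by default, and the dataset's limit option raises that cap. A sketch of the adjusted call (the value 20 is illustrative):

      $("#Impersonate").typeahead({ minLength: 2, highlight: true }, {
          name: "blackboard-names",
          displayKey: 'value',
          limit: 20,                  // default is 5
          source: engine.ttAdapter()
      });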


  • Converting an input text value to a decimal number

    - by vitto
    Hi, I'm trying to work with decimal data in my PHP and MySQL practice, and I'm not sure how to get an acceptable level of accuracy. I've written a simple function which receives my input text value and converts it to a decimal number ready to be stored in the database.

      <?php
      function unit ($value, $decimal_point = 2) {
          return number_format (str_replace (",", ".", strip_tags (trim ($value))), $decimal_point);
      }
      ?>

    I've resolved input like AbdlBsF5%?nl with some jQuery code for replacement and some regex to keep only numbers, dots and commas. In some countries, people use the comma , to write decimal numbers, so a number like 72.08 is written 72,08. I'd like to avoid forcing people to change their usual characters, and I've decided to use jQuery to handle this too. Now, every web developer knows the last validation must be handled by the dynamic page for security reasons. So my question is: should I use something like the unit() function to store data, or should I also check whether users insert invalid characters like letters or something else? If I try this and send letters, the query runs without saving the invalid data; I think this isn't bad, but I could easily be wrong, because I'm a rookie. What kind of method should I use if I want a number like 99999.99 for my query?
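
    A sketch of a stricter server-side version (assuming the same comma-to-dot convention; the function name is hypothetical): reject anything that is not numeric after normalization instead of letting number_format() silently coerce it.

      // Returns a '99999.99'-style string, or false for invalid input.
      function to_decimal($value, $decimals = 2) {
          $value = str_replace(',', '.', trim(strip_tags($value)));
          if (!is_numeric($value)) {
              return false;            // letters and stray symbols are rejected
          }
          return number_format((float)$value, $decimals, '.', '');
      }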


  • Setting Parameters from another Parameter In SSRS

    - by Mike
    I was able to get this working in SSRS 2008, but due to the fact that my company only has 2005 servers, I need to downgrade the report to 2005. The idea is that for a given person name there are two key fields, EntityType and EntityId, so I have a parameter from a dataset where the label is the name and the value is EntityType_EntityId. I use the Split function to take the values on either side of the underscore. In 2008, I set the query parameters of the dataset to the Split function and it works. In 2005, I set the Default Value of each of the report parameters. Now when I run the report and put textboxes showing the values of the parameters, the values are shown correctly, but the query does not run. I am guessing that this is a lifecycle issue:

    1. Get the Name parameter.
    2. Run the report.
    3. THEN set the parameters = Split of Name.

    But the problem with that is that the second time I run the report I should get results, and I do not. Does anyone know what I am doing wrong? I guess I can pass the underscore-delimited string into the stored procedure and parse it there, but my question is: can this be done in the report? The reason being that other callers will pass in the parameters as two separate values.


  • I would like to prevent these entries from being added to the event log.

    - by David Smith
    Our client's application event log is getting filled up with warnings due to a bug in the Microsoft SQL Server report viewer control, http://support.microsoft.com/kb/973219. They have thousands of users running reports, so this is making their event log hard to use, and they want the entries removed on a frequent basis. I tried using PowerShell to remove the events, but that does not seem possible. Is there a way to prevent these entries from being written to the event log in the first place? I'm thinking I would like to filter out events where the event source = "ASP.NET 2.0.50727.0", eventId = "1309", and the message contains "Reserved.ReportViewerWebControl.axd".


  • MySQL cluster: Error after 1 data node is shut down and started again

    - by nitins
    We have configured MySQL Cluster (version 7.1) with 2 SQL/data nodes. We are using table spaces instead of in-memory clustering. The setup was working fine. So, to test the setup, I shut down one data node, updated a table, and then started the stopped node again. It's giving this error and not starting. Any ideas?

      Forced node shutdown completed. Occured during startphase 5. Caused by
      error 2306: 'Pointer too large(Internal error, programming error or
      missing error message, please report a bug). Temporary error, restart
      node'.


  • Designing a fluent Javascript interface to abstract away the asynchronous nature of AJAX

    - by Anurag
    How would I design an API to hide the asynchronous nature of AJAX and HTTP requests, or basically delay it, to provide a fluent interface? To show an example from Twitter's new Anywhere API:

      // get @ded's first 20 statuses, filter only the tweets that
      // mention photography, and render each into an HTML element
      T.User.find('ded').timeline().first(20).filter(filterer).each(function(status) {
          $('div#tweets').append('<p>' + status.text + '</p>');
      });

      function filterer(status) {
          return status.text.match(/photography/);
      }

    versus this (where the asynchronous nature of each call is clearly visible):

      T.User.find('ded', function(user) {
          user.timeline(function(statuses) {
              statuses.first(20).filter(filterer).each(function(status) {
                  $('div#tweets').append('<p>' + status.text + '</p>');
              });
          });
      });

    It finds the user, gets their tweet timeline, filters only the first 20 tweets, applies a custom filter, and ultimately uses the callback function to process each tweet. I am guessing that a well-designed API like this should work like a query builder (think ORMs), where each function call builds the query (an HTTP URL in this case) until it hits a looping function such as each/map/etc.; at that point the HTTP call is made and the passed-in function becomes the callback. An easy development route would be to make each AJAX call synchronous, but that's probably not the best solution. I am interested in figuring out a way to make it asynchronous and still hide the asynchronous nature of AJAX.
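
    A minimal sketch of the query-builder idea described in that last paragraph (all names here are illustrative, not Twitter's API): each chained call only records a step and returns this; the single HTTP request fires when the looping call arrives.

      function Timeline(user) {
          this.steps = [{ op: 'find', user: user }];
      }
      Timeline.prototype.first = function(n) {
          this.steps.push({ op: 'first', n: n });
          return this;                       // keep the chain going
      };
      Timeline.prototype.filter = function(fn) {
          this.steps.push({ op: 'filter', fn: fn });
          return this;
      };
      Timeline.prototype.each = function(callback) {
          var url = buildUrl(this.steps);    // hypothetical: steps -> URL
          httpGet(url, function(items) {     // hypothetical async transport
              for (var i = 0; i < items.length; i++) callback(items[i]);
          });
      };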

