Search Results

Search found 34207 results on 1369 pages for 'query output'.


  • Optimal two variable linear regression SQL statement

    - by Dave Jarvis
    Problem: I am looking to apply the equation y = mx + b (where m is SLOPE and b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code below. The values from the (MySQL) query are: SLOPE = 0.0276653965651912, INTERCEPT = -57.2338357550468.
    SQL Code:
        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 5 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; and insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t
    Data: The data is visualized here:
    Questions:
    1. How do I return the y value for all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values? (See the sketch below.)
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (used to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., are outliers skewing the data)?
        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406
    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!
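
    One way to avoid re-running the aggregation (question 1) is to materialize the inner query once and compute the coefficients from that. A sketch, not from the original post: the t_vals name and the @slope/@intercept user variables are illustrative, and it assumes a temporary table is acceptable:
        CREATE TEMPORARY TABLE t_vals AS
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE C.ID = 8590
            AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15
            AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
            AND Y.YEAR BETWEEN 1900 AND 2009
            AND M.YEAR_REF_ID = Y.ID
            AND M.CATEGORY_ID = '001'
            AND M.ID = D.MONTH_REF_ID
            AND D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR;

        -- Compute the coefficients once, into user variables
        -- (avoids referencing the temporary table twice in one statement):
        SELECT ((sum(YEAR) * sum(AMOUNT)) - (count(1) * sum(YEAR * AMOUNT))) /
               (power(sum(YEAR), 2) - count(1) * sum(power(YEAR, 2))),
               ((sum(YEAR) * sum(YEAR * AMOUNT)) - (sum(AMOUNT) * sum(power(YEAR, 2)))) /
               (power(sum(YEAR), 2) - count(1) * sum(power(YEAR, 2)))
          INTO @slope, @intercept
          FROM t_vals;

        -- The fitted y for every row, without repeating the big join:
        SELECT YEAR, @slope * YEAR + @intercept AS y FROM t_vals;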

    Read the article

  • MySQL FULLTEXT not working

    - by Ross
    I'm attempting to add search support to my PHP web app using MySQL's FULLTEXT indexes. I created a test table (using the MyISAM engine, with a single text field, a) and entered some sample data. Now, if I'm right, the following query should return both of those rows:
        SELECT * FROM test WHERE MATCH(a) AGAINST('databases')
    However, it returns none. I've done a bit of research and I'm doing everything right as far as I can tell - the table is a MyISAM table, the FULLTEXT indexes are set. I've tried running the query from the prompt and from phpMyAdmin, with no luck. Am I missing something crucial?
    UPDATE: OK, while Cody's solution worked in my test case, it doesn't seem to work on my actual table:
        CREATE TABLE IF NOT EXISTS `uploads` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` text NOT NULL,
          `size` int(11) NOT NULL,
          `type` text NOT NULL,
          `alias` text NOT NULL,
          `md5sum` text NOT NULL,
          `uploaded` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=6 ;
    And the data I'm using:
        INSERT INTO `uploads` (`id`, `name`, `size`, `type`, `alias`, `md5sum`, `uploaded`) VALUES
        (1, '04 Sickman.mp3', 5261182, 'audio/mp3', '1', 'df2eb6a360fbfa8e0c9893aadc2289de', '2009-07-14 16:08:02'),
        (2, '07 Dirt.mp3', 5056435, 'audio/mp3', '2', 'edcb873a75c94b5d0368681e4bd9ca41', '2009-07-14 16:08:08'),
        (3, 'header_bg2.png', 16765, 'image/png', '3', '5bc5cb5c45c7fa329dc881a8476a2af6', '2009-07-14 16:08:30'),
        (4, 'page_top_right2.png', 5299, 'image/png', '4', '53ea39f826b7c7aeba11060c0d8f4e81', '2009-07-14 16:08:37'),
        (5, 'todo.txt', 392, 'text/plain', '5', '7ee46db77d1b98b145c9a95444d8dc67', '2009-07-14 16:08:46');
    The query I'm now running is:
        SELECT * FROM `uploads` WHERE MATCH(name) AGAINST ('header' IN BOOLEAN MODE)
    which should return row 3, header_bg2.png. Instead I get another empty result set. My options for boolean searching are below:
        mysql> show variables like 'ft_%';
        +--------------------------+----------------+
        | Variable_name            | Value          |
        +--------------------------+----------------+
        | ft_boolean_syntax        | + -><()~*:""&| |
        | ft_max_word_len          | 84             |
        | ft_min_word_len          | 4              |
        | ft_query_expansion_limit | 20             |
        | ft_stopword_file         | (built-in)     |
        +--------------------------+----------------+
        5 rows in set (0.02 sec)
    "header" is within the word-length restrictions, and I doubt it's a stopword (I'm not sure how to get the list). Any ideas?
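
    Worth noting: the CREATE TABLE shown defines no FULLTEXT index on name, which by itself would explain the empty result set (MATCH ... AGAINST requires one, even in boolean mode on MyISAM). A sketch of adding it; the index name is illustrative:
        ALTER TABLE `uploads` ADD FULLTEXT `ft_name` (`name`);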

    Read the article

  • .NET 4.0 Implementing OutputCacheProvider

    - by azamsharp
    I am checking out the OutputCacheProvider in ASP.NET 4.0 and using it to store my output cache in a MongoDB database. I am not able to understand the purpose of the Add method, which is one of the override methods of OutputCacheProvider. The Add method is invoked when you have VaryByParam set to something. So, if I have VaryByParam = "id" then the Add method will be invoked. But after Add, Set is also invoked, and I can insert into the MongoDB database inside the Set method.
        public override void Set(string key, object entry, DateTime utcExpiry)
        {
            // if there is something in the query string, use the path and query to generate the key
            var url = HttpContext.Current.Request.Url;
            if (!String.IsNullOrEmpty(url.Query))
            {
                key = url.PathAndQuery;
            }
            Debug.WriteLine("Set(" + key + "," + entry + "," + utcExpiry + ")");
            _service.Set(new CacheItem() { Key = MD5(key), Item = entry, Expires = utcExpiry });
        }
    Inside the Set method I use PathAndQuery to get the query-string parameters, take an MD5 hash of the key, and save the entry into the MongoDB database. It seems like the Add method would be useful if I were doing something like VaryByParam = "custom". Can anyone shed some light on the Add method of OutputCacheProvider?
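
    For reference, Add differs from Set in its contract: it should store the entry only if the key is not already cached, and return whatever entry ends up in the cache (Set unconditionally overwrites). A sketch against the same store used above - the _service.Get lookup is an assumption about that hypothetical store, not part of the original post:
        public override object Add(string key, object entry, DateTime utcExpiry)
        {
            var hashed = MD5(key);
            var existing = _service.Get(hashed); // assumed lookup on the MongoDB-backed store
            if (existing != null && existing.Expires > DateTime.UtcNow)
            {
                // Another request cached this key first: keep and return the existing entry.
                return existing.Item;
            }
            _service.Set(new CacheItem() { Key = hashed, Item = entry, Expires = utcExpiry });
            return entry;
        }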

    Read the article

  • MSBuild script fails but produces no errors

    - by Kate
    I have an MSBuild script that I am executing through TeamCity. One of the tasks it runs is from Xheo DeployLX CodeVeil, which obfuscates some DLLs. The task I am using is called VeilProject. I have run the CodeVeil project through the interface manually and it works correctly, so I think I can safely assume that the actual obfuscation process is OK. This task used to take around 40 minutes, and the rest of the MSBuild file executed perfectly and finished without errors. For some reason this task is now taking 1 hour 20 minutes or so to execute. Once the VeilProject task is finished, the output from the task says it completed successfully; however, the MSBuild script fails at this point. I have a task directly after the VeilProject task and its output never appears. Using diagnostic output from MSBuild I can see the following (log below). My questions are:
    1. Would it be possible that the MSBuild script has timed out - i.e., the task completes, but only after a certain timeout period, so the build just fails?
    2. Why would the build fail with no errors and no warnings?
        [05:39:06]: [Target "Obfuscate"] Finished.
        [05:39:06]: [Target "Obfuscate"] Saving exception map
        [05:49:21]: [Target "Obfuscate"] Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds
        [05:49:22]: [Target "Obfuscate"] Done.
        [05:49:51]: MSBuild output:
        Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds (TaskId:8)
        Done. (TaskId:8)
        Done executing task "VeilProject" -- FAILED. (TaskId:8)
        Done building target "Obfuscate" in project "AMK_Release.proj.teamcity.patch.tcprojx" -- FAILED.: (TargetId:12)
        Done Building Project "C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx" (All target(s)) -- FAILED.
        Project Performance Summary:
          6535484 ms  C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx   1 calls
          6535484 ms  All                                                         1 calls
        Target Performance Summary:
              156 ms  PreClean                1 calls
              266 ms  SetBuildVersionNumber   1 calls
             2406 ms  CopyFiles               1 calls
          6532391 ms  Obfuscate               1 calls
        Task Performance Summary:
               16 ms  MakeDir                 2 calls
               31 ms  TeamCitySetBuildNumber  1 calls
               31 ms  Message                 1 calls
               62 ms  RemoveDir               2 calls
              234 ms  GetAssemblyIdentity     1 calls
             2406 ms  Copy                    1 calls
          6528047 ms  VeilProject             1 calls
        Build FAILED.
        0 Warning(s)
        0 Error(s)
        Time Elapsed 01:48:57.46
        [05:49:52]: Process exit code: 1
        [05:49:55]: Build finished

    Read the article

  • Bing search API AJAX request does not work

    - by jhon
    Hi guys, I want to use Bing's search API with JavaScript. Actually, I want the user to write something and query Bing in order to get just images, so I tried it using AJAX. If I request the URL http://api.search.live.net/xml.aspx?Appid=[YOURAPIKEY]&sources=image&query=home directly (with the browser) I do get an XML document, but if I use XMLHttpRequest it does not work.
        <html>
        <body>
        <script>
        var xhr = new XMLHttpRequest();
        var url = "http://api.search.live.net/xml.aspx?Appid=[YOURAPIKEY]&sources=image&query=home";
        xhr.open("GET", url, true);
        xhr.onreadystatechange = function(){
            /*if( xhr.readyState == 4 && xhr.status == 200) {
                document.write( xhr.responseText );
            }*/
            alert( xhr.readyState + " " + xhr.status + xhr.statusText + xhr );
        };
        xhr.send(null);
        </script>
        </body>
        </html>
    Questions:
    1. Why does the code above not work?
    2. Is there any other way to do this without XMLHttpRequest?
    Thanks. BTW, I'm just interested in fixing this for Firefox, and without external libraries (jQuery and so on).
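
    The likely culprit is the browser's same-origin policy: a plain XMLHttpRequest cannot read a response from api.search.live.net when the page is served from a different host. The usual library-free workaround was a script-tag (JSONP-style) request against the API's JSON endpoint. A sketch - the JsonType/JsonCallback parameter names are recalled from that era's API and should be verified against its documentation:
        <script>
        function handleResults(response) {
            // the image hits arrived somewhere under response.SearchResponse in that API's JSON shape
            alert(response);
        }
        </script>
        <script src="http://api.search.live.net/json.aspx?Appid=[YOURAPIKEY]&sources=image&query=home&JsonType=callback&JsonCallback=handleResults"></script>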

    Read the article

  • WriteableBitmap failing badly, pixel array very inaccurate

    - by dawmail333
    I have tried, literally for hours, and I have not been able to budge this problem. I have a UserControl that is 800x369, and it contains, simply, a path that forms a world map. I put this on a landscape page, then I render it into a WriteableBitmap. I then run a conversion to turn the 1-D Pixels array into a 2-D array of integers. Then, to check the conversion, I wire up the custom control's click command to use the Point.X and Point.Y relative to the custom control as indexes into the newly created array. My logic is thus:
        wb = new WriteableBitmap(worldMap, new TranslateTransform());
        wb.Invalidate();
        intTest = wb.Pixels.To2DArray(wb.PixelWidth);
    My conversion logic is as follows:
        public static int[,] To2DArray(this int[] arr, int rowLength)
        {
            int[,] output = new int[rowLength, arr.Length / rowLength];
            if (arr.Length % rowLength != 0)
                throw new IndexOutOfRangeException();
            for (int i = 0; i < arr.Length; i++)
            {
                output[i % rowLength, i / rowLength] = arr[i];
            }
            return output;
        }
    Now, when I do the checking, I get completely and utterly strange results: apparently all pixels have values of either -1 or 0, and these values are completely independent of the original colours. Just for posterity, here's my checking code:
        private void Check(object sender, MouseButtonEventArgs e)
        {
            Point click = e.GetPosition(worldMap);
            ChangeNotification(intTest[(int)click.X, (int)click.Y].ToString());
        }
    The results show absolutely no correlation to the path that the WriteableBitmap has rendered into it. The path has a fill of solid white. What the heck is going on? I've tried for hours with no luck. Please, this is the major problem stopping me from submitting my first WP7 app. Any guidance?
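
    One observation that may dissolve the mystery: WriteableBitmap.Pixels stores each pixel as a signed 32-bit ARGB integer, so opaque white (0xFFFFFFFF) reads back as -1 and a fully transparent, unrendered pixel as 0 - exactly the two values being seen for a white path on an empty background. A sketch of unpacking one pixel, for illustration:
        int pixel = wb.Pixels[y * wb.PixelWidth + x];   // 1-D index: row * width + column
        byte a = (byte)(pixel >> 24);   // alpha
        byte r = (byte)(pixel >> 16);   // red
        byte g = (byte)(pixel >> 8);    // green
        byte b = (byte)(pixel);         // blue
        // Opaque white is 0xFFFFFFFF, i.e. -1 as a signed int; unrendered pixels are 0.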

    Read the article

  • How to define custom path to Interop *.dll

    - by NoviceAndNovice
    Well, I have an ActiveX (*.ocx) component, and I use it in a managed C++/CLI project by writing a managed wrapper around the ActiveX component. (.NET has great interop services: it gives me a generated DLL so I can easily use the component from managed code.) The problem is that Visual Studio (2008) automatically copies the generated interop *.dll to the directory where my *.exe file lives, but I want to put all my generated interop *.dlls into one folder. Suppose my directory structure is:
        D:\MyProject\Output\MyProject.exe            // My managed exe
        D:\MyProject\Output\Interop.XXXLib.1.0.dll   // Interop dll
    I want to put Interop.XXXLib.1.0.dll into a new folder, D:\MyProject\Output\Interops, and use it from that directory. How can I do it? Best wishes.
    PS: What I found so far was using codeBase/probing tags in my app.config file, such as:
        <?xml version="1.0"?>
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com.asm.v1">
              <probing privatePath="Interops" />
            </assemblyBinding>
          </runtime>
        </configuration>
    But it did not work in C++/CLI.
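
    One detail worth checking before anything else: the assemblyBinding namespace in the snippet above uses a dot (urn:schemas-microsoft-com.asm.v1), but the runtime only honours the element under urn:schemas-microsoft-com:asm.v1 (with a colon) and silently ignores it otherwise. The corrected config, for comparison:
        <?xml version="1.0"?>
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <probing privatePath="Interops" />
            </assemblyBinding>
          </runtime>
        </configuration>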

    Read the article

  • Regular expression either/or not matching everything

    - by dwatransit
    I'm trying to parse an HTTP GET request to determine if the URL contains any of a number of file types. If it does, I want to capture the entire request. There is something I don't understand about ORing. The following regular expression only captures part of the request, and only if .flv is the first in the list of ORed values. (I've obscured the URLs with spaces because Stack Overflow limits hyperlinks.)
        regex:        GET.*?(\.flv)|(\.mp4)|(\.avi).*?
        test text:    GET http: // foo.server.com/download/0/37/3000016511/.flv?mt=video/xy
        match output: GET http: // foo.server.com/download/0/37/3000016511/.flv
    I don't understand why the .*? at the end of the regex isn't allowing it to capture the entire text. If I get rid of the ORing of file types, then it works. Here is the test code in case my explanation doesn't make sense:
        public static void main(String[] args) {
            String sourcestring = "GET http: // foo.server.com/download/0/37/3000016511/.flv?mt=video/xy";
            Pattern re = Pattern.compile("GET .*?\\.flv.*"); // this works
            // output:
            // [0][0] = GET http :// foo.server.com/download/0/37/3000016511/.flv?mt=video/xy

            // the match from the following ends with the ".flv", not the entire url.
            // also it only works if .flv is the first of the 3 ORd options
            //Pattern re = Pattern.compile("GET .*?(\\.flv)|(\\.mp4)|(\\.avi).*?");
            // output:
            // [0][0] = GET http: // foo.server.com/download/0/37/3000016511/.flv
            // [0][1] = .flv
            // [0][2] = null
            // [0][3] = null

            Matcher m = re.matcher(sourcestring);
            int mIdx = 0;
            while (m.find()) {
                for (int groupIdx = 0; groupIdx < m.groupCount() + 1; groupIdx++) {
                    System.out.println("[" + mIdx + "][" + groupIdx + "] = " + m.group(groupIdx));
                }
                mIdx++;
            }
        }
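
    The behaviour follows from precedence: | binds more loosely than concatenation, so the pattern parses as (GET .*?\.flv) | (\.mp4) | (\.avi.*?) - only the first alternative carries the GET prefix and only the last carries the trailing wildcard. Grouping the alternation fixes it; a sketch:
        Pattern re = Pattern.compile("GET .*?(\\.flv|\\.mp4|\\.avi).*");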

    Read the article

  • Is there a standard for storing normalized phone numbers in a database?

    - by Eric Z Beard
    What is a good data structure for storing phone numbers in database fields? I'm looking for something that is flexible enough to handle international numbers, and also something that allows the various parts of the number to be queried efficiently.
    [Edit] Just to clarify the use case here: I currently store numbers in a single varchar field, and I leave them just as the customer entered them. Then, when the number is needed by code, I normalize it. The problem is that if I want to query a few million rows to find matching phone numbers, it involves a function, like
        where dbo.f_normalizenum(num1) = dbo.f_normalizenum(num2)
    which is terribly inefficient. Also, queries that are looking for things like the area code become extremely tricky when it's just a single varchar field.
    [Edit] People have made lots of good suggestions here, thanks! As an update, here is what I'm doing now: I still store numbers exactly as they were entered, in a varchar field, but instead of normalizing things at query time, I have a trigger that does all that work as records are inserted or updated. So I have ints or bigints for any parts that I need to query, and those fields are indexed to make queries run faster.
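
    A minimal sketch of the final approach described (raw input preserved, trigger-maintained numeric parts, indexed); every table and column name here is illustrative rather than from the original post:
        CREATE TABLE PhoneNumbers (
            Id          INT IDENTITY PRIMARY KEY,
            RawNumber   VARCHAR(50) NOT NULL,  -- exactly as the customer entered it
            CountryCode INT NULL,              -- the parts below are populated by an
            AreaCode    INT NULL,              -- INSERT/UPDATE trigger that normalizes RawNumber
            Subscriber  BIGINT NULL
        );
        CREATE INDEX IX_PhoneNumbers_Parts
            ON PhoneNumbers (CountryCode, AreaCode, Subscriber);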

    Read the article

  • Opening a Unicode file with Perl

    - by Jaco Pretorius
    I'm using osql to run several SQL scripts against a database, and then I need to look at the results file to check whether any errors occurred. The problem is that Perl doesn't seem to like the fact that the results files are Unicode. I wrote a little test script, and the output comes out all garbled:
        $file = shift;
        open OUTPUT, $file or die "Can't open $file: $!\n";
        while (<OUTPUT>) {
            print $_;
            if (/Invalid|invalid|Cannot|cannot/) {
                push(@invalids, $file);
                print "invalid file - $inputfile - schedule for retry\n";
                last;
            }
        }
    Any ideas? I've tried decoding using decode_utf8 but it makes no difference. I've also tried to set the encoding when opening the file. I think the problem might be that osql puts the result file in UTF-16 format, but I'm not sure. When I open the file in TextPad it just tells me 'Unicode'. Edit: Using perl v5.8.8.
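
    If the file really is UTF-16 (TextPad's bare 'Unicode' label usually means UTF-16LE, which matches what osql produces for Unicode output), an explicit encoding layer on open is the usual fix. A sketch that works on perl 5.8.8:
        # the :encoding(UTF-16) layer honours the BOM and decodes each line
        open my $fh, '<:encoding(UTF-16)', $file
            or die "Can't open $file: $!\n";
        while (<$fh>) {
            print $_;    # lines now arrive as decoded Perl strings
        }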

    Read the article

  • Counting computers for each lab

    - by Irvin
    Alright, I have a problem with having to count PCs and Macs from different labs. For each lab I need to display how many PCs and Macs are available. The data is coming from a SQL Server; right now I am trying subqueries and the use of UNION, and the query below is the closest I can get to what I need. It shows me the number of PCs and Macs in two different columns but, of course, the PCs end up in one row and the Macs in another right below it, so each lab comes up twice. EX:
        LabName -- PC / MAC
        Lab1    -- 5  / 0
        Lab1    -- 0  / 2
    Query:
        SELECT Labs.LabName, COUNT(*), 0 AS Mac
        FROM HardWare
        INNER JOIN Labs ON HardWare.LabID = Labs.LabID
        WHERE ComputerStatus = 'AVAILABLE'
        GROUP BY Labs.LabName
        UNION
        SELECT Labs.LabName, COUNT(*), (SELECT COUNT(Manufacturer)) AS Mac
        FROM HardWare
        INNER JOIN Labs ON HardWare.LabID = Labs.LabID
        WHERE ComputerStatus = 'AVAILABLE' AND Manufacturer = 'Apple'
        GROUP BY Labs.LabName
        ORDER BY Labs.LabName
    So is there any way to get them together in one row, as in Lab1 -- 5 / 2, or is there a different way to write the query? Anything will be a big help; I am pretty much stuck here. Cheers.
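
    Conditional aggregation collapses this to one row per lab without the UNION; a sketch, assuming every non-Apple machine counts as a PC:
        SELECT Labs.LabName,
               SUM(CASE WHEN Manufacturer <> 'Apple' THEN 1 ELSE 0 END) AS PC,
               SUM(CASE WHEN Manufacturer = 'Apple'  THEN 1 ELSE 0 END) AS Mac
        FROM HardWare
        INNER JOIN Labs ON HardWare.LabID = Labs.LabID
        WHERE ComputerStatus = 'AVAILABLE'
        GROUP BY Labs.LabName
        ORDER BY Labs.LabName;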

    Read the article

  • Why can't I save long text in my MySQL database?

    - by DomingoSL
    I'm trying to save to my database a long text (about 2500 chars) input by my users using a web form and passed to the server with PHP. When I look in phpMyAdmin, the text gets cropped. How can I configure my table to store the complete text? This is my table definition:
        CREATE TABLE `extra_879` (
          `id` bigint(20) NOT NULL auto_increment,
          `id_user` bigint(20) NOT NULL,
          `title` varchar(300) NOT NULL,
          `content` varchar(3000) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `id_user` (`id_user`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ;
    Note that the content field has a limit of 3000 chars, but the text always gets cropped at 690 chars. Thanks for any help!
    EDIT: I found the problem, but I don't know how to solve it. The query is getting cropped always at the same character, a special one: ù
    EDIT 2: This is the cropped query:
        INSERT INTO extra_879 (id,id_user,title,content) VALUES (NULL,'1','Informazione Extra',' Riconoscimenti Laurea di ingegneria presa a le 22 anni e in il terso posto della promozione Diploma analista di sistemi ottenuto il rating massimo 20/20, primo posto della promozione. Borsa di Studio (offerta dal Ministero Esteri Italiano) vinta nel 2010 (Valutazione del territorio attraverso le nueve tecnologie) Pubblicazione di paper; Stima del RCS della nave CCGS radar sulla base dei risultati di H. Leong e H. Wilson. http://www.ing.uc.edu.vek-azozayalarchivospdf/PAPER-Sarmiento.pdf Tesi di laurea: PROGETTAZIONE E REALIZZAZIONE DI UN SIS-TEMA DI TELEMETRIA GSM PER IL CONTROLLO DELLO STATO DI TRANSITO VEICOLARE E CLIMA (ottenuto il punteggio pi')
    It gets cropped just at the phrase "(ottenuto il punteggio più alto)" - just when ù appears...
    EDIT 3: I'm using jQuery + AJAX to send the query:
        $.ajax({type: "POST",
                url: "handler.php",
                data: "e_text=" + $('#e_text').val() + "&e_title=" + $('#extra_title').val(),
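
    Truncation at the first non-ASCII character is the classic symptom of a charset mismatch: if the bytes arriving from the form are in one encoding while the MySQL connection is declared as another, the server cuts the string at the first byte sequence that is invalid for the connection charset. Two things worth trying, sketched under that assumption (values are placeholders, not a confirmed fix):
        // PHP side, right after connecting: make the connection charset
        // match what the page actually sends (here assumed to be UTF-8)
        mysql_query("SET NAMES utf8");

        // JavaScript side: URL-encode the value instead of concatenating it raw,
        // so multibyte characters (and characters like & or =) survive the POST:
        //   data: "e_text=" + encodeURIComponent($('#e_text').val()) + ...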

    Read the article

  • HATEOAS - Discovery and URI Templating

    - by Paul Kirby
    I'm designing a HATEOAS API for internal data at my company, but have been having trouble with the discovery of links. Consider the following set of steps for someone to retrieve information about a specific employee in this system:
    1. User sends GET to http://coredata/ to get all available resources; this returns a number of links, including one tagged as rel = "http://coredata/rels/employees".
    2. User follows the HREF of that rel from the first request, performing a GET at (for example) http://coredata/employees.
    The data returned from this last call is my conundrum, and a situation where I've heard mixed suggestions. Here are some of them:
    1. That GET will return all employees (with perhaps truncated data), and the client would be responsible for picking the one it wants from that list.
    2. That GET would return a number of URI-templated links describing how to query / get one employee / get all employees. Something like:
        "_links": {
            "http://coredata/rels/employees#RetrieveOne": {
                "href": "http://coredata/employees/{id}"
            },
            "http://coredata/rels/employees#Query": {
                "href": "http://coredata/employees{?login,firstName,lastName}"
            },
            "http://coredata/rels/employees#All": {
                "href": "http://coredata/employees/all"
            }
        }
    I'm a little stuck here on what remains closest to HATEOAS. For option 1, I really do not want to make my clients retrieve all employees every time for the sake of navigation, but I can see how using URI templating in example two introduces some out-of-band knowledge. My other thought was to use the RetrieveOne, Query, and All operations as my cool URLs, but that seems to violate the concept that you should be able to navigate to the resources you want from one base URI. Has anyone else managed to come up with a good way to handle this? Navigation is dead simple once you've retrieved one resource or a set of resources, but it seems very difficult to use for discovery.

    Read the article

  • Actual element tags are not getting captured.

    - by user323719
    I am using the piece of XSL code below to construct a span tag calling a JavaScript function on mouseover. The input to the JavaScript should be an HTML table, but the output from the variable "showContent" gives just the text content, without the table tags. How can this be resolved?
    XSL:
        <xsl:variable name="aTable" as="element()*">
          <table border="0" cellspacing="0" cellpadding="0">
            <xsl:for-each select="$capturedTags">
              <tr><td><xsl:value-of select="node()" /></td></tr>
            </xsl:for-each>
          </table>
        </xsl:variable>
        <xsl:variable name="start" select='concat("Tip(&#39;", "")'></xsl:variable>
        <xsl:variable name="end" select='concat("&#39;)", "")'></xsl:variable>
        <xsl:variable name="showContent">
          <xsl:value-of select='concat($start,$aTable,$end)'/>
        </xsl:variable>
        <span xmlns="http://www.w3.org/1999/xhtml" onmouseout="{$hideContent}" onmouseover="{$showContent}" id="{$textNodeId}"><xsl:value-of select="$textNode"></xsl:value-of></span>
    Actual output:
        <span onmouseout="UnTip()" onmouseover="Tip('content1')" id="d1t14">is my </span>
    Expected output:
        <span onmouseout="UnTip()" onmouseover="Tip('<table><tr><td>content1</td></tr>')" id="d1t14">is my </span>
    What change needs to be made in the above XSL for the table, tr and td tags to get passed through?
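
    The behaviour itself is by design: concat() atomizes $aTable to its string value, which is the concatenated text with all markup stripped. Embedding markup in an attribute requires serializing the tree to a string explicitly. One sketch, assuming a processor-specific serialize extension such as Saxon's is available (the exact second argument - the name of an xsl:output definition - should be checked against the processor's documentation):
        <!-- requires xmlns:saxon="http://saxon.sf.net/" declared on the stylesheet -->
        <xsl:variable name="showContent"
                      select='concat($start, saxon:serialize($aTable, "default"), $end)'/>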

    Read the article

  • Using Partitions for a large MySQL table

    - by user293594
    An update on my attempts to implement a 505,000,000-row table in MySQL on my MacBook Pro: following the advice given, I have partitioned my table, tr:
        i  INT UNSIGNED NOT NULL,
        j  INT UNSIGNED NOT NULL,
        A  FLOAT(12,8) NOT NULL,
        nu BIGINT NOT NULL,
        KEY (nu),
        KEY (A)
    with a range on nu. nu ought to be a real number, but because I only have 6-d.p. accuracy and the maximum value of nu is 30,000, I multiplied it by 10^8 and made it a BIGINT - I gather one can't use FLOAT or DOUBLE values to PARTITION a MySQL table. Anyway, I have 15 partitions (p0: nu < 25,000,000,000; p1: nu < 50,000,000,000; etc.). I was thinking that this should speed up a typical SELECT:
        SELECT * FROM tr WHERE nu > 95000000000 AND nu < 100000000000 AND A > 1
    to something of the order of the same query on a table consisting of only the data in the relevant partition (< 30 secs). But it's taking 30 minutes or more to return rows for queries within a partition, and double that if the query spans two (contiguous) partitions. I realise I could just have 15 different tables and query them separately, but is there a way to do this 'automatically' with partitions? Has anyone got any suggestions?
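
    One quick diagnostic worth running before anything else: EXPLAIN PARTITIONS (available since MySQL 5.1, the version that introduced partitioning) shows whether the optimizer actually prunes the query to the expected partition or scans them all:
        EXPLAIN PARTITIONS
        SELECT * FROM tr
        WHERE nu > 95000000000 AND nu < 100000000000 AND A > 1;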

    Read the article

  • Getting error: mysql_connect(): access denied for 'system'@'localhost' (using password: NO)

    - by user309381
        class MySQLDatabase {
            public $connection;

            function __construct() {
                $this->open_connection();
            }

            public function open_connection() {
                $this->connection = mysql_connect(DB_SERVER, DB_USER, DB_PASS);
                if (!$this->connection) {
                    die("Database Connection Failed" . mysql_error());
                } else {
                    $db_select = mysql_select_db(DB_NAME, $this->connection);
                    if (!$db_select) {
                        die("Database Selection Failed" . mysql_error());
                    }
                }
            }

            public function close_connection() {
                if (isset($this->connection)) {
                    mysql_close($this->connection);
                    unset($this->connection);
                }
            }

            public function query(/*$sql*/) {
                $sql = "SELECT * FROM users WHERE id = 1";
                $result = mysql_query($sql);
                $this->confirm_query($result);
                // return $result;
                while ($found_user = mysql_fetch_assoc($result)) {
                    echo $found_user['username'];
                }
            }

            private function confirm_query($result) {
                if (!$result) {
                    die("The Query has a problem" . mysql_error());
                }
            }
        }

        $database = new MySQLDatabase();
        $database->open_connection();
        $database->query();
        $database->close_connection();
        ?>
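
    The error text means MySQL received the user 'system' with no password at all - i.e., mysql_connect never got real credentials. Make sure the DB_* constants are defined, with the right values, before MySQLDatabase is instantiated; a sketch with placeholder values:
        // must run before new MySQLDatabase() - e.g. in a config include
        define('DB_SERVER', 'localhost');
        define('DB_USER',   'your_user');
        define('DB_PASS',   'your_password');
        define('DB_NAME',   'your_database');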

    Read the article

  • Extracting a .app from a zip file in Python, using ZipFile

    - by Yakattak
    I'm trying to extract new revisions of Chromium.app from their snapshots, and I can download the file fine, but when it comes to extracting it, ZipFile either extracts the chrome-mac folder within as a file, says that directories don't exist, etc. I am very new to Python, so these errors make little sense to me. Here is what I have so far:
        import urllib2

        response = urllib2.urlopen('http://build.chromium.org/buildbot/snapshots/chromium-rel-mac/LATEST')
        latestRev = response.read()
        print latestRev

        # we have the revision, now we need to download the zip and extract it
        latestZip = urllib2.urlopen('http://build.chromium.org/buildbot/snapshots/chromium-rel-mac/%i/chrome-mac.zip' % (int(latestRev)), '~/Desktop/ChromiumUpdate/%i-update' % (int(latestRev)))

        # declare some vars that hold paths n shit
        workingDir = '/Users/slehan/Desktop/ChromiumUpdate/'
        chromiumZipPath = '%s%i-update.zip' % (workingDir, (int(latestRev)))
        chromiumAppPath = 'chrome-mac/'  # the path of the chromium executable within the zip file
        chromiumAppExtracted = '%s/Chromium.app' % (workingDir)  # path of the extracted executable

        output = open(chromiumZipPath, 'w')  # delete any current file there
        output.write(latestZip.read())
        output.close()

        # we have the .zip, now we need to extract the Chromium.app file, it's in ziproot/chrome-mac/Chromium.app
        import zipfile, os

        zippedFile = open(chromiumZipPath)
        zippedChromium = zipfile.ZipFile(zippedFile, 'r')
        zippedChromium.extract(chromiumAppPath, workingDir)
        #print zippedChromium.namelist()
        zippedChromium.close()
    Any ideas?
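
    Two things stand out. First, the zip should be written and read in binary mode ('wb' / 'rb'); on some platforms text mode corrupts the archive. Second, ZipFile.extract on a single directory entry creates only that one entry - pulling out everything under chrome-mac/ needs extractall with a member list. A sketch under those assumptions (Python 2.6+):
        import zipfile

        zippedChromium = zipfile.ZipFile(open(chromiumZipPath, 'rb'))
        try:
            # every archive member inside chrome-mac/, including Chromium.app's contents
            members = [m for m in zippedChromium.namelist() if m.startswith('chrome-mac/')]
            zippedChromium.extractall(workingDir, members)
        finally:
            zippedChromium.close()
    One caveat: the zipfile module does not restore symlinks or executable permission bits, both of which a Mac .app bundle may rely on.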

    Read the article

  • Generalizing Fibonacci sequence with SICStus Prolog

    - by Christophe Herreman
    I'm trying to find a solution for a query on a generalized Fibonacci sequence (GFS). The query is: are there any GFS that have 885 as their 12th number? The initial two numbers may be restricted to between 1 and 10. I have already found the solution for finding the Nth number in a sequence that starts at (1, 1), in which I explicitly define the initial numbers. Here is what I have for this:
        fib(1, 1).
        fib(2, 1).
        fib(N, X) :-
            N #> 1,
            Nmin1 #= N - 1,
            Nmin2 #= N - 2,
            fib(Nmin1, Xmin1),
            fib(Nmin2, Xmin2),
            X #= Xmin1 + Xmin2.
    For the query mentioned, I thought the following would do the trick, in which I reuse the fib method without defining the initial numbers explicitly, since this now needs to be done dynamically:
        fib2 :-
            X1 in 1..10,
            X2 in 1..10,
            fib(1, X1),
            fib(2, X2),
            fib(12, 885).
    ... but this does not seem to work. Is it not possible to define the initial numbers this way, or am I doing something terribly wrong? I'm not asking for the solution, but any advice that could help me solve this would be greatly appreciated.
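
    A structural hint: fib(1, X1) can only unify X1 with the hard-coded fact fib(1, 1), so the seeds never become search variables. Passing the seeds as arguments keeps them constrainable; a sketch (SICStus, assuming library(clpfd) is loaded; predicate names are illustrative):
        :- use_module(library(clpfd)).

        % gfs(Seed1, Seed2, N, X): X is the N-th element of the GFS with the given seeds
        gfs(X1, _,  1, X1).
        gfs(_,  X2, 2, X2).
        gfs(X1, X2, N, X) :-
            N #> 2,
            Nmin1 #= N - 1,
            Nmin2 #= N - 2,
            gfs(X1, X2, Nmin1, Xmin1),
            gfs(X1, X2, Nmin2, Xmin2),
            X #= Xmin1 + Xmin2.

        gfs_query(X1, X2) :-
            X1 in 1..10,
            X2 in 1..10,
            gfs(X1, X2, 12, 885),
            labeling([], [X1, X2]).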

    Read the article

  • Binary files printing and desired precision

    - by yCalleecharan
    Hi, I'm printing a variable, say z1, which is a 1-D array containing floating-point numbers, to a text file so that I can import it into Matlab or gnuplot for plotting. I've heard that binary files (.dat) are smaller than .txt files. The definition that I currently use for printing to a .txt file is:
        void create_out_file(const char *file_name, const long double *z1, size_t z_size){
            FILE *out;
            size_t i;
            if((out = _fsopen(file_name, "w+", _SH_DENYWR)) == NULL){
                fprintf(stderr, "***> Open error on output file %s", file_name);
                exit(-1);
            }
            for(i = 0; i < z_size; i++)
                fprintf(out, "%.16Le\n", z1[i]);
            fclose(out);
        }
    I have three questions:
    1. Are binary files really more compact than text files?
    2. If yes, I would like to know how to modify the above code so that I can print the values of the array z1 to a binary file. I've read that fprintf has to be replaced with fwrite. My output file, say dodo.dat, should contain the values of array z1 with one floating-point number per line.
    3. I have %.16Le in my code above, but I think %.15Le is right, as I have 15 precision digits with long double. I have put a dot (.) in the width position, as I believe this allows expansion to an arbitrary field to hold the desired number. Am I right? As an example, with %.16Le I can get output like 1.0047914240730432e-002, which gives me 16 precision digits and a field of the right width to display the number correctly. Is placing a dot (.) in the width position instead of a width value a good practice?
    Thanks a lot...
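
    On question 2, a minimal sketch of the fwrite version. Note that a binary file has no notion of "one number per line" - it is simply z_size * sizeof(long double) raw bytes, which Matlab's fread can consume given the element size:
        #include <stdio.h>
        #include <stdlib.h>

        void create_bin_file(const char *file_name, const long double *z1, size_t z_size)
        {
            FILE *out = fopen(file_name, "wb");   /* "b": binary mode */
            if (out == NULL) {
                fprintf(stderr, "***> Open error on output file %s", file_name);
                exit(-1);
            }
            /* one call writes the whole array as raw bytes */
            fwrite(z1, sizeof(long double), z_size, out);
            fclose(out);
        }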

    Read the article

  • facebook javascript api

    - by ngreenwood6
    I am trying to get my status from Facebook using the JavaScript API. I have the following code:
        <div id="fb-root"></div>
        <div id="data"></div>
        <script src="http://connect.facebook.net/en_US/all.js"></script>
        <script type="text/javascript">
        (function(){
            FB.init({
                appId  : 'SOME ID',
                status : true, // check login status
                cookie : true, // enable cookies to allow the server to access the session
                xfbml  : true  // parse XFBML
            });
        });

        getData();

        function getData(){
            var query = FB.Data.query('SELECT message FROM status WHERE uid=12345 LIMIT 10');
            query.wait(function(rows) {
                for(i = 0; i < rows.length; i++){
                    document.getElementById('data').innerHTML += 'Your status is ' + rows[i].message + '<br />';
                }
            });
        }
        </script>
    When I try to get my name it works fine, but the status is not working. Can someone please tell me what I am doing wrong, because the documentation for this is horrible. And yes, I replaced the uid with a fake one, and yes, I put in my app id - like I said, when I try to get my name it works fine. Any help is appreciated.
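
    One likely cause, offered as a hint rather than a confirmed fix: the status FQL table requires an authenticated session with the read_stream permission, whereas a user's name is public. A sketch of requesting it with the old JavaScript SDK (the perms option is recalled from that SDK's login API; verify against the docs):
        FB.login(function(response) {
            if (response.session) {
                getData();  // only query once we have a session with read_stream
            }
        }, {perms: 'read_stream'});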

    Read the article

  • Moving .NET assemblies away from the application base directory?

    - by RasmusKL
    I have a WinForms application with a bunch of third-party references. This makes the output folder quite messy. I'd like to place the compiled / referenced DLLs into a common subdirectory of the output folder - bin / lib, whatever - and have just the executables (plus needed configs etc.) reside in the output folder itself. After some searching I ran into assembly probing (http://msdn.microsoft.com/en-us/library/4191fzwb.aspx) - and verified that if I set this up and manually move the assemblies, my application will still work when they are stored in the designated subdirectory, like so:
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <probing privatePath="bin" />
            </assemblyBinding>
          </runtime>
        </configuration>
    However, this doesn't solve the build part - is there any way to specify where referenced assemblies and compiled library assemblies go? The only solutions I can think of off the top of my head are post-build actions, or dropping the idea and using ILMerge or something. There has got to be a better way of defining the structure :-)
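
    For the post-build route mentioned above, an AfterBuild target in the .csproj is one way to sweep the DLLs into the probed subfolder after compilation. A sketch - the target body and item name are illustrative, not a confirmed recipe:
        <Target Name="AfterBuild">
          <ItemGroup>
            <LibFiles Include="$(OutputPath)*.dll" />
          </ItemGroup>
          <!-- move every DLL (the exe is untouched) into the folder named by privatePath -->
          <Copy SourceFiles="@(LibFiles)" DestinationFolder="$(OutputPath)bin" />
          <Delete Files="@(LibFiles)" />
        </Target>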

    Read the article

  • How do I view the full content of a text or varchar(MAX) column in SQL Server 2008 Management Studio

    - by adamjford
    In this live SQL Server 2008 (build 10.0.1600) database, there's an Events table, which contains a text column named Details. (Yes, I realize this should actually be a varchar(MAX) column, but whoever set this database up did not do it that way.) This column contains very large logs of exceptions and associated JSON data that I'm trying to access through SQL Server Management Studio, but whenever I copy the results from the grid to a text editor, it truncates them at 43,679 characters. I've read in various locations on the Internet that you can set Maximum Characters Retrieved for XML Data in Tools > Options > Query Results > SQL Server > Results To Grid to Unlimited, and then perform a query such as this:
        select Convert(xml, Details) from Events where EventID = 13920
    (Note that the data in the column is not XML at all. CONVERTing the column to XML is merely a workaround I found from Googling that someone else has used to get around the limit SSMS has on retrieving data from a text or varchar(MAX) column.)
    However, after setting the option above, running the query, and clicking on the link in the result, I still get the following error:
        Unable to show XML. The following error happened:
        Unexpected end of file has occurred. Line 5, position 220160.
        One solution is to increase the number of characters retrieved from the server for XML data.
        To change this setting, on the Tools menu, click Options.
    So, any idea how to access this data? Would converting the column to varchar(MAX) fix my woes?
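
    A variant of the same trick that sidesteps the parse error (the raw text is not well-formed XML, so the XML viewer can choke on it): emit the column as an XML processing instruction, which is displayed without being parsed. A sketch; it still relies on the XML-data grid limit being set to Unlimited, and it breaks if the text happens to contain the sequence ?> :
        SELECT Details AS [processing-instruction(x)]
        FROM Events
        WHERE EventID = 13920
        FOR XML PATH(''), TYPE;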

    Read the article

  • Is the order of params important in NHibernate?

    - by Blake Blackwell
    If I have an int parameter followed by a string parameter in a sproc, I get the following error:
        Input string was not in the correct format
    However, if I switch those parameters in the sproc, then I get the result set I expect. Are params sorted by data type, or do I have to do anything special in my config file? I've included my code for reference.
    Config file:
        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="NHibernateDemo" namespace="NHibernateDemo.Domain">
          <class name="Blake_Test" table="Blake_Test">
            <id name="TestId" column="TESTID"></id>
            <property name="TestName" column="TESTNAME" />
            <loader query-ref="GetBlakeTest"/>
          </class>
          <sql-query name="GetBlakeTest" callable="true">
            <return class="Blake_Test" />
            call procedure AREA51.NHIBERNATE_TEST.GetBlakeTest(:int_TestId, :vch_TestName)
          </sql-query>
        </hibernate-mapping>
    Sproc code:
        PROCEDURE GetBlakeTest
        (
            ret_cursor OUT SYS_REFCURSOR,
            int_testid integer,
            vch_testname varchar2
        )
        AS
        BEGIN
            OPEN ret_cursor FOR
            SELECT TestId, TestName
            FROM blake_test
            WHERE testid = int_testid
            ORDER BY TestName DESC;
        END GetBlakeTest;
        END NHIBERNATE_TEST;
    Executing code:
        IQuery query1 = session.GetNamedQuery( "GetBlakeTest" );
        query1.SetParameter( "int_TestId", 1 );
        query1.SetParameter( "vch_TestName", "TEST" );
        IList<Blake_Test> mystuff = query1.List<Blake_Test>();

    Read the article

  • Access report not showing data

    - by Brian Smith
    I have two queries that I am using to generate a report, and the problem is that when I run the report, three fields do not show any data at all for some reason.
    Query 1:
        SELECT ClientSummary.Field3 AS PM,
               ClientSummary.[Client Nickname 2] AS [Project #],
               ClientSummary.[Client Nickname 1] AS Customer,
               ClientSummary.[In Reference To] AS [Job Name],
               ClientSummary.Field10 AS Contract,
               (SELECT SUM([Billable Slip Value]) FROM Util_bydate AS U1
                WHERE U1.[Client Nickname 2] = ClientSummary.[Client Nickname 2]) AS [This Week],
               (SELECT SUM([Billable Slip Value]) FROM Util AS U2
                WHERE U2.[Client Nickname 2] = ClientSummary.[Client Nickname 2]) AS [To Date],
               [To Date]/[Contract] AS [% Spent],
               0 AS Backlog,
               ClientSummary.[Total Slip Fees & Costs] AS Billed,
               ClientSummary.Payments AS Paid,
               ClientSummary.[Total A/R] AS Receivable,
               [Forms]![ReportMenu]![StartDate] AS [Start Date],
               [Forms]![ReportMenu]![EndDate] AS [End Date]
        FROM ClientSummary;
    Query 2:
        SELECT JobManagement_Summary.pm,
               JobManagement_Summary.[project #],
               JobManagement_Summary.Customer,
               JobManagement_Summary.[Job Name],
               JobManagement_Summary.Contract,
               IIf(IsNull([This Week]), 0, [This Week]) AS [N_This Week],
               IIf(IsNull([To Date]), 0, [To Date]) AS [N_To Date],
               [% Spent],
               JobManagement_Summary.Backlog,
               JobManagement_Summary.Billed,
               JobManagement_Summary.Paid,
               JobManagement_Summary.Receivable,
               JobManagement_Summary.[Start Date],
               JobManagement_Summary.[End Date]
        FROM JobManagement_Summary;
    When I run the report from query 2, the three fields N_This Week, N_To Date and % Spent come back with no data. It isn't the IIf functions, as it doesn't matter whether I keep those or remove them. Any thoughts? If I connect directly to the first recordset it works fine, but then SQL throws the error message:
        Multi-level GROUP BY clause not allowed in subquery.
    Is there any way to get around that message and link to it directly, or does anyone have ANY clue why these fields are coming back blank? I am at wits' end here!

    Read the article

  • Ruby on Rails - Primary and Foreign key

    - by Eef
    Hey, I am creating a site in Ruby on Rails. I have two models, a User model and a Transaction model. These models both belong to an Account, so they both have a field called account_id. I am trying to set up an association between them like so:
        class User < ActiveRecord::Base
          belongs_to :account
          has_many :transactions
        end

        class Transaction < ActiveRecord::Base
          belongs_to :account
          belongs_to :user
        end
    I am using these associations like so:
        user = User.find(1)
        transactions = user.transactions
    At the moment the application is trying to find the transactions by user_id. Here is the SQL it generates:
        Mysql::Error: Unknown column 'transactions.user_id' in 'where clause':
        SELECT * FROM `transactions` WHERE (`transactions`.user_id = 1)
    This is incorrect, as I would like to find the transactions via the account_id. I have tried setting up the associations like so:
        class User < ActiveRecord::Base
          belongs_to :account
          has_many :transactions, :primary_key => :account_id, :class_name => "Transaction"
        end

        class Transaction < ActiveRecord::Base
          belongs_to :account
          belongs_to :user, :foreign_key => :account_id, :class_name => "User"
        end
    This almost achieves what I am looking to do, and generates the following SQL:
        Mysql::Error: Unknown column 'transactions.user_id' in 'where clause':
        SELECT * FROM `transactions` WHERE (`transactions`.user_id = 104)
    The number 104 is the correct account_id, but it is still querying the transactions table for a user_id field. Could someone give me some advice on how to set up the associations so that the transactions table is queried by account_id instead of user_id, resulting in SQL like this:
        SELECT * FROM `transactions` WHERE (`transactions`.account_id = 104)
    Cheers, Eef
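
    The missing piece is that on has_many, :primary_key names the attribute on the owning model while :foreign_key names the column on the associated table - both are needed here. A sketch (the belongs_to side's :primary_key option requires a Rails version that supports it; check the release in use):
        class User < ActiveRecord::Base
          belongs_to :account
          has_many :transactions, :primary_key => :account_id, :foreign_key => :account_id
        end

        class Transaction < ActiveRecord::Base
          belongs_to :account
          belongs_to :user, :primary_key => :account_id, :foreign_key => :account_id
        end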

    Read the article
