Search Results

Search found 11543 results on 462 pages for 'partition wise join'.


  • Cookies with urllib

    - by CMC
    This will probably seem like a really simple question, and I am quite confused as to why it is so difficult for me. I would like to write a function that takes three inputs, [url, data, cookies], and uses urllib (not urllib2) to get the contents of the requested url. I figured it'd be simple, so I wrote the following:

        import urllib

        def fetch(url, data = None, cookies = None):
            if isinstance(data, dict):
                data = urllib.urlencode(data)
            if isinstance(cookies, dict):
                # TODO: find a better way to do this
                cookies = "; ".join([str(key) + "=" + str(cookies[key]) for key in cookies])
            opener = urllib.FancyURLopener()
            opener.addheader("Cookie", cookies)
            obj = opener.open(url, data)
            result = obj.read()
            obj.close()
            return result

    This doesn't work, as far as I can tell (can anyone confirm that?), and I'm stumped.
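
    One guess at the failure, sketched rather than confirmed: when cookies is None, addheader("Cookie", None) sends a malformed header. A minimal variant that only attaches the header when there is a value (everything else unchanged):

        import urllib

        def fetch(url, data=None, cookies=None):
            if isinstance(data, dict):
                data = urllib.urlencode(data)
            if isinstance(cookies, dict):
                cookies = "; ".join("%s=%s" % (k, v) for k, v in cookies.items())
            opener = urllib.FancyURLopener()
            if cookies:  # only send the header when there is something to send
                opener.addheader("Cookie", cookies)
            obj = opener.open(url, data)
            try:
                return obj.read()
            finally:
                obj.close()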

  • How to avoid the same calculations on column values over and over again in a select?

    - by Peter
    I sometimes write SELECTs of the form:

        SELECT a.col1 + b.col2 * c.col4 AS calc_col1,
               a.col1 + b.col2 * c.col4 + xxx AS calc_col1_PLUS_MORE
        FROM ...
        INNER JOIN ... ON a.col1 + b.col2 * c.col4 < d.some_threshold
        WHERE a.col1 + b.col2 * c.col4 > 0

    When the calculations get rather involved and are used 3-5 times within the same SELECT, I would really like to refactor them out into a function or similar, in order to 1) hopefully improve performance / make use of the cache, and 2) avoid forgetting to update one of the four copies when I later realize I need to change the calculation. I usually have these SELECTs within SPs. Any ideas?
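
    One refactoring worth sketching, assuming SQL Server since the question mentions SPs (aliases a-d as above): compute the expression once in a CROSS APPLY and reference its alias everywhere else, so the formula lives in exactly one place:

        SELECT x.calc       AS calc_col1,
               x.calc + xxx AS calc_col1_PLUS_MORE
        FROM a
        INNER JOIN b ON ...
        INNER JOIN c ON ...
        CROSS APPLY (SELECT a.col1 + b.col2 * c.col4 AS calc) AS x
        INNER JOIN d ON x.calc < d.some_threshold
        WHERE x.calc > 0

    The optimizer generally expands the alias rather than caching the value, so the win is maintainability more than speed.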

  • Why is my mysql database timestamp changing by itself?

    - by Scarface
    Hey guys, quick question. I have an entry that I put in my database, and as I echo the value, the value in the database stays the same while the echoed data keeps increasing, which is messing up my function. If anyone knows what's going on, I would appreciate any suggestions.

        <?php
        include("../includes/connection.php");
        $query = "SELECT * FROM points
                  LEFT JOIN users ON points.user_id = users.id
                  WHERE points.topic_id='82' AND users.username='gman'";
        $check = mysql_query($query);
        while ($row = mysql_fetch_assoc($check)) {
            $points_id = $row['points_id'];
            echo $timestamp = $row['timestamp'];
        }
        ?>
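
    One guess worth verifying with SHOW CREATE TABLE points (it would explain the title, if not the exact symptom): in MySQL, the first TIMESTAMP column in a table auto-updates on every UPDATE of the row unless it is declared otherwise. Pinning it down would look like:

        -- Hypothetical: assumes the column is named `timestamp`, as the
        -- PHP above suggests. DEFAULT without ON UPDATE stops the
        -- silent changes.
        ALTER TABLE points
            MODIFY `timestamp` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;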

  • What is the proper way to URL encode Unicode characters?

    - by Josh Gibson
    I know of the non-standard %uxxxx scheme, but that doesn't seem like a wise choice since it has been rejected by the W3C. Some interesting examples: the heart character ♥. If I type this into my browser:

        http://www.google.com/search?q=♥

    then copy and paste it, I see this URL:

        http://www.google.com/search?q=%E2%99%A5

    which makes it seem like Firefox (or Safari) is doing this:

        urllib.quote_plus(x.encode("latin-1"))
        '%E2%99%A5'

    which makes sense, except for things that can't be encoded in Latin-1, like the triple dot character …. If I type the URL

        http://www.google.com/search?q=…

    into my browser and then copy and paste, I get

        http://www.google.com/search?q=%E2%80%A6

    back, which seems to be the result of doing:

        urllib.quote_plus(x.encode("utf-8"))

    That makes sense, since … can't be encoded with Latin-1. But then it's not clear to me how the browser knows whether to decode with UTF-8 or Latin-1, since this seems to be ambiguous:

        In [67]: u"…".encode('utf-8').decode('latin-1')
        Out[67]: u'\xc3\xa2\xc2\x80\xc2\xa6'

    What's the right thing to be doing with the special characters I need to deal with?
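
    For reference, a sketch of the convention modern browsers follow (it is also what the IRI spec, RFC 3987, builds on): always encode the text as UTF-8, then percent-encode the bytes, and try UTF-8 first when decoding:

        import urllib

        def encode_query_value(value):
            # UTF-8 first, then percent-encode the bytes; this reproduces
            # the %E2%99%A5 and %E2%80%A6 forms seen in the browser.
            return urllib.quote_plus(value.encode("utf-8"))

        print encode_query_value(u"\u2665")  # %E2%99%A5 (the heart)
        print encode_query_value(u"\u2026")  # %E2%80%A6 (the ellipsis)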

  • even PHP has 'bugs' with IE

    - by silversky
    It's not a real bug, BUT for sure it is not what you would expect. I have this sample code to upload images:

        <?php
        if($type=="image/jpg" || $type=="image/jpeg" || $type=="image/pjpeg" ||
           $type=="image/tiff" || $type=="image/gif" || $type=="image/png") {
            // make upload
        } else {
            echo "Incorrect format ....";
        }
        ?>

    The problem is that if I modify the extension of an image, say to .jpgq or even .jpg%, and I try to upload it, FF and Chrome will say that the file's type is "application/octet-stream" and normally the condition will be false. BUT since IE is 'smarter' than other browsers, it will say that the file is "image/pjpeg", the condition will be true, and the file will be uploaded; of course, later no browser will be able to read / view the image. It is not a bug, because msdn.microsoft.com says: "If the 'suggested' (server-provided) MIME type is unknown (not known and not ambiguous), FindMimeFromData immediately returns this MIME type" and "If the server-provided MIME type is either known or ambiguous, the buffer is scanned in an attempt to verify or obtain a MIME type from the actual content", plus other 'innovative solutions' from Microsoft. SO my questions are:

    Why is IE so 'smart', and why does it know the real MIME type when I upload the file to the server BUT fail to read it from the server?
    How can I work around this issue (if the file doesn't have the right extension, the condition has to be false)?
    Is it wise to check the extension (and not the MIME type)? Are any of the above extensions not recommended? Should I add others?
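
    A common workaround, sketched here (PHP 5.3+ assumed for the finfo class, and the form field name 'upload' is hypothetical): ignore both the extension and the browser-supplied type, and sniff the bytes on the server instead:

        <?php
        // Decide by file content, not $_FILES['upload']['type'] or extension.
        $finfo = new finfo(FILEINFO_MIME_TYPE);
        $type  = $finfo->file($_FILES['upload']['tmp_name']);

        $allowed = array('image/jpeg', 'image/png', 'image/gif', 'image/tiff');
        if (in_array($type, $allowed, true)) {
            // make upload
        } else {
            echo "Incorrect format ....";
        }
        ?>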

  • FileMaker - Getting field values from related table

    - by foobar
    I have the following setup in FileMaker Pro 10:

        Table1 with: id_table1, related_names
        Table2 with: id_table2, name, include

    and a joint-table with: id_table1, id_table2. Now I want to either make related_names a calculated field or write a script that sets related_names to a comma-separated list of all names which are connected through the joint-table and have Table2.include = True. So, for example, a data set could look like:

        Table1
        id_table1 | related_names
        ----------+--------------
        1         | "foo,bar"
        2         | "foo"
        3         | ""

        joint-table
        id_table1 | id_table2
        ----------+----------
        1         | 1
        1         | 2
        1         | 3
        2         | 1

        Table2
        id_table2 | name | include
        ----------+------+--------
        1         | foo  | True
        2         | bar  | True
        3         | baz  | False

    After searching the internet for a few hours, the closest I came was a calculated field with List(joint-table::id_table2), which gives me a list of all the id_table2's. But now I would need to find the appropriate records in Table2 and check the include field. I hope the problem is clear. Any help is highly appreciated.
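
    One possible route, offered untested and with the field names taken from the question: give the joint-table a calculation that resolves to the related name only when include is set, then collapse the list in Table1 (FileMaker's List() skips empty values):

        // In the joint-table (unstored calculation, result type Text):
        name_if_included = If ( Table2::include ; Table2::name ; "" )

        // In Table1: List() returns the related values separated by
        // returns; Substitute() turns the returns into commas.
        related_names = Substitute ( List ( joint-table::name_if_included ) ; ¶ ; "," )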

  • rails: include statement with two ON conditions

    - by Markus
    Hi, I have three tables:

        books
        bookmarks
        users

    where there is an n-to-m relation from books to users through bookmarks. I'm looking for a query where I get all the books of a certain user, including the bookmarks. If there is no bookmark, a null should be included. My SQL statement looks like:

        SELECT * FROM `books`
        LEFT OUTER JOIN `bookmarks`
          ON bookmarks.book_id = books.id AND bookmarks.user_id = ?

    In Rails I only know the :include statement, but how can I add the second bookmarks.user_id = ? condition to the ON section of this query? If I put it in the :conditions part, no null results get returned! Thanks! Markus
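
    A sketch of one workaround (Rails 2.x assumed, since :include is in play): skip :include for this association and hand-write the LEFT OUTER JOIN with :joins, which accepts a SQL fragment, so the user condition stays in the ON clause:

        # user_id is whatever user you are filtering on (hypothetical).
        user_id = 42
        Book.find(:all,
          :select => "books.*, bookmarks.id AS bookmark_id",
          :joins  => "LEFT OUTER JOIN bookmarks " +
                     "ON bookmarks.book_id = books.id " +
                     "AND bookmarks.user_id = #{user_id.to_i}")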

  • Downloading a Directory Tree with FTPLIB

    - by Anthony Lemmer
    I'd like to download a directory and all of its contents to the local HD. Here's the code I have thus far (crashes if there's a sub-directory, else grabs all the files):

        import ftplib
        import configparser
        import os

        def runBackups():
            # Load INI
            filename = 'connections.ini'
            config = configparser.SafeConfigParser()
            config.read(filename)
            connections = config.sections()
            i = 0
            while i < len(connections):
                # Load settings
                uri = config.get(connections[i], "uri")
                username = config.get(connections[i], "username")
                password = config.get(connections[i], "password")
                backupPath = config.get(connections[i], "backuppath")
                archiveTo = config.get(connections[i], "archiveto")
                # Start back-ups
                ftp = ftplib.FTP(uri)
                ftp.login(username, password)
                ftp.set_debuglevel(2)
                ftp.cwd(backupPath)
                files = ftp.nlst()
                for filename in files:
                    ftp.retrbinary('RETR %s' % filename,
                                   open(os.path.join(archiveTo, filename), 'wb').write)
                ftp.quit()
                i += 1
            print()
            print("Back-ups complete.")
            print()
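
    A minimal recursive sketch, under two assumptions: directories are detected by attempting cwd(), which raises error_perm for plain files, and NLST returns bare names:

        import ftplib
        import os

        def download_tree(ftp, remote_dir, local_dir):
            """Mirror remote_dir into local_dir, recursing into sub-dirs."""
            os.makedirs(local_dir, exist_ok=True)
            ftp.cwd(remote_dir)
            for name in ftp.nlst():
                if name in ('.', '..'):       # some servers list these
                    continue
                try:
                    ftp.cwd(name)             # only succeeds for directories
                    ftp.cwd('..')
                    download_tree(ftp, name, os.path.join(local_dir, name))
                except ftplib.error_perm:     # a plain file: fetch it
                    with open(os.path.join(local_dir, name), 'wb') as fh:
                        ftp.retrbinary('RETR %s' % name, fh.write)
            ftp.cwd('..')                     # restore the caller's directory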

  • How do I set up mod rewrite to do this?

    - by Ali
    Hi guys, here's the scene: I'm building a web application which basically creates accounts for all users. Currently it is set up with this file structure:

        root/index.php
        root/someotherfile.php
        root/images/image-sub-folder/image.jpg
        root/js/somejs.js

    When users create an account they choose a fixed group name, and then users can join that group. Initially I thought of having an extra textbox in the login screen to enter the group the user belongs to. But I would like instead to have something like virtual folders, in this case:

        root/group-name/index.php

    I heard it can be done with Apache mod_rewrite, but I'm not sure how. Any help here? Basically, instead of having something like &group-name=yourGroupName appended to every page, I would just like something of the nature above.
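
    A minimal mod_rewrite sketch, assuming a .htaccess at the web root and group names that never clash with real directories like images/ or js/:

        RewriteEngine On

        # Leave requests for real files and directories alone.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d

        # /someGroup/index.php -> /index.php?group-name=someGroup
        RewriteRule ^([^/]+)/(.+)$ /$2?group-name=$1 [QSA,L]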

  • implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend to write a small download manager in C++ that supports resuming (and multiple connections per download). From the info I've gathered so far, when sending the HTTP request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". Then the server returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file into the number of allowed connections per file and send an HTTP request per part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 parts and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those.

    Is this the right way to do this?
    What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.)
    When sending the HTTP requests, should I specify the entire part's size in the range, or ask for smaller pieces, say 1024k per request?
    When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
    Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory-wise? What if I have several downloads simultaneously?
    If I'm not using a memory-mapped file, should I open the file per allowed connection, or simply seek when needing to write? (If I did use a memory-mapped file this would be really easy, since I could simply have several pointers.)

    Note: I'll probably be using Qt, but this is a general question, so I left code out of it.
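
    On the "what if the server doesn't support resuming" point, a small sketch (Qt assumed, as the note suggests; URL and offsets hypothetical): request one slice and inspect the status code. A 206 Partial Content reply means ranges are honoured; a plain 200 means the server ignored Range and is sending the whole file, so the downloader should fall back to a single connection:

        #include <QNetworkAccessManager>
        #include <QNetworkReply>
        #include <QNetworkRequest>
        #include <QUrl>

        // Ask for bytes 1 MB .. 2 MB of the file.
        QNetworkAccessManager manager;
        QNetworkRequest request(QUrl("http://example.com/file.bin"));
        request.setRawHeader("Range", "bytes=1048576-2097151");
        QNetworkReply *reply = manager.get(request);

        // ...after the finished() signal fires:
        int status = reply->attribute(
            QNetworkRequest::HttpStatusCodeAttribute).toInt();
        bool supportsResume = (status == 206);  // 200 = whole file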

  • How to find N Consecutive records in a table using SQL

    - by user320587
    Hi, I have the following table definition with sample data. In this table, Customer, Product, and Date are key fields.

        Table One
        Customer  Product  Date        SALE
        X         A        01/01/2010  YES
        X         A        02/01/2010  YES
        X         A        03/01/2010  NO
        X         A        04/01/2010  NO
        X         A        05/01/2010  YES
        X         A        06/01/2010  NO
        X         A        07/01/2010  NO
        X         A        08/01/2010  NO
        X         A        09/01/2010  YES
        X         A        10/01/2010  YES
        X         A        11/01/2010  NO
        X         A        12/01/2010  YES

    In the above table, I need to find the runs of N or more consecutive records where there was no sale, i.e. where SALE is 'NO'. For example, if N is 2, then the result set would return the following:

        Customer  Product  Date        SALE
        X         A        03/01/2010  NO
        X         A        04/01/2010  NO
        X         A        06/01/2010  NO
        X         A        07/01/2010  NO
        X         A        08/01/2010  NO

    Can someone help me with a SQL query to get the desired results? I am using SQL Server 2005. I started playing with ROW_NUMBER() and PARTITION BY clauses, but no luck. Thanks for any help.
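
    A sketch of the usual "gaps and islands" approach for SQL Server 2005 (the table is assumed to be named TableOne; columns as in the question): the difference of two ROW_NUMBER()s is constant within a run of identical SALE values, so it can label each run and measure its length:

        DECLARE @N int;
        SET @N = 2;

        WITH runs AS (
            SELECT Customer, Product, [Date], Sale,
                   ROW_NUMBER() OVER (PARTITION BY Customer, Product
                                      ORDER BY [Date])
                 - ROW_NUMBER() OVER (PARTITION BY Customer, Product, Sale
                                      ORDER BY [Date]) AS grp
            FROM TableOne
        ),
        measured AS (
            SELECT *, COUNT(*) OVER (PARTITION BY Customer, Product,
                                     Sale, grp) AS run_len
            FROM runs
        )
        SELECT Customer, Product, [Date], Sale
        FROM measured
        WHERE Sale = 'NO' AND run_len >= @N
        ORDER BY Customer, Product, [Date];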

  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things), but I am still unsure if I understand it correctly. The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index?

    The above-mentioned MSDN article states: "The pages in the data chain and the rows in them are ordered on the value of the clustered index key." And Using Clustered Indexes, for example, states: "For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted."

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no? Or is it rather (as I suspect) that there are two scenarios depending on how "full" the first data page is:

    A) If the page has enough free space to accommodate the record, it is placed into the existing data page, and data might be (physically) reordered within that page.
    B) If the page does not have enough free space for the record, a new data page is created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-tree.

    This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page) but not to the pages residing on consecutive blocks on the physical hard drive; the data pages are then just linked together in the correct order. Or, formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read data pages sequentially (following the links), but these pages are not (necessarily) block-wise in sequence on disk (so the disk head has to move "randomly"). How close am I? :)
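
    For what it's worth, scenario B can be observed rather than argued about: page splits leave logical fragmentation behind, which this SQL Server 2005 DMV reports (table name hypothetical):

        -- avg_fragmentation_in_percent counts pages whose next page in
        -- key order is not the next physical page; it climbs after splits.
        SELECT index_id, avg_fragmentation_in_percent, page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(), OBJECT_ID('dbo.MyBigTable'), NULL, NULL, 'LIMITED');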

  • global scope of variable

    - by shantanuo
    The following shell script checks the disk space and should change the variable diskfull to 1 if the usage is more than 10%. But the last echo always shows 0. I tried setting diskfull=1 globally in the if clause, but it did not work. How do I change the variable to 1 if the disk space consumed is more than 10%?

        #!/bin/sh
        diskfull=0
        ALERT=10
        df -HP | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' |
        while read output; do
            #echo $output
            usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1)
            partition=$(echo $output | awk '{ print $2 }')
            if [ $usep -ge $ALERT ]; then
                diskfull=1
                exit
            fi
        done
        echo $diskfull
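
    A sketch of the usual fix: every stage of a pipeline runs in a subshell, so the loop's diskfull=1 dies with that subshell. Feeding the loop from a here-document keeps it in the current shell, and the assignment survives:

        #!/bin/sh
        diskfull=0
        ALERT=10

        while read usep partition; do
            usep=${usep%\%}                # strip the trailing % sign
            if [ "$usep" -ge "$ALERT" ]; then
                diskfull=1
                break                      # exit would end the whole script
            fi
        done <<EOF
        $(df -HP | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }')
        EOF

        echo $diskfull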

  • How to use text columns in a trigger

    - by Jeremy
    I am trying to use an update trigger in SQL Server 2000 so that when an update occurs, I insert a row into a history table, retaining all history on the table:

        CREATE TRIGGER trUpdate_MyTable ON MyTable
        FOR UPDATE
        AS
        INSERT INTO [MyTableHistory] (
            [AuditType], [MyTable_ID], [Inserted], [LastUpdated],
            [LastUpdatedBy], [Vendor_ID], [FromLocation], [FromUnit],
            [FromAddress], [FromCity], [FromProvince],
            [FromContactNumber], [Comment])
        SELECT [AuditType] = 'U', D.*
        FROM deleted D
        JOIN inserted I ON I.[ID] = D.[ID]
        GO

    Of course, I get the error "Cannot use text, ntext, or image columns in the 'inserted' and 'deleted' tables." I tried joining to MyTable instead of deleted, but because the trigger fires after the update, it ends up inserting the new record into the history table, when I want the original record. How can I do this and still use text columns?
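
    One documented escape hatch in SQL Server 2000, sketched with the column list abbreviated: AFTER triggers cannot reference text, ntext, or image columns in inserted and deleted, but INSTEAD OF triggers can, at the price of re-issuing the update yourself:

        CREATE TRIGGER trUpdate_MyTable ON MyTable
        INSTEAD OF UPDATE
        AS
        BEGIN
            -- INSTEAD OF triggers may read text columns from deleted.
            INSERT INTO MyTableHistory (AuditType, MyTable_ID, /* ... */ Comment)
            SELECT 'U', D.ID, /* ... */ D.Comment
            FROM deleted D;

            -- Re-apply the update the trigger intercepted
            -- (list every updatable column explicitly).
            UPDATE T
            SET T.Vendor_ID = I.Vendor_ID  -- , ...remaining columns...
            FROM MyTable T
            JOIN inserted I ON I.ID = T.ID;
        END
        GO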

  • accessing associations within GORM Events

    - by Don
    Hi, my Grails app has a User class and a Role class. The User class has an authorities property which holds the roles assigned to that user, e.g.:

        class User {
            static hasMany = [authorities: Role]

            // This method is called automatically after a user is inserted
            def afterInsert() {
                this.authorities.size()
            }
        }

    If I create a user and assign them a role, a NullPointerException is thrown from the GORM event method afterInsert(), because authorities is null. If I comment out afterInsert(), the user is saved correctly along with the assigned role. Is there some reason why I can't access associations from GORM event methods? Is it possible that this event is fired after the User row is inserted, but before the row is added to the User-Role join table? Thanks, Don
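
    A hedged sketch of one commonly suggested workaround (untested here, and it does not change when the join-table rows get written): afterInsert runs in the middle of a Hibernate flush, so touching a lazy association can misbehave; reading through a fresh session avoids poking the mid-flush one:

        class User {
            static hasMany = [authorities: Role]

            def afterInsert() {
                // Query in a separate session instead of the flushing one.
                User.withNewSession {
                    User.get(this.id)?.authorities?.size()
                }
            }
        }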

  • Are keys and values of %INC platform-dependent or not?

    - by codeholic
    I'd like to get the full filename of an included module. Consider this code:

        package MyTest;
        my $path = join '/', split /::/, __PACKAGE__;
        $path .= ".pm";
        print "$INC{$path}\n";
        1;

        $ perl -Ipath/to/module -MMyTest -e0
        path/to/module/MyTest.pm

    Will it work on all platforms? perlvar says:

        The hash %INC contains entries for each filename included via the
        do, require, or use operators. The key is the filename you specified
        (with module names converted to pathnames), and the value is the
        location of the file found.

    Are these keys platform-dependent or not? Should I use File::Spec or what? At least ActivePerl on win32 uses / instead of \.

    Update: What about the %INC values? Are they platform-dependent?

  • Having trouble using 'AND' in CONTAINSTABLE SQL SERVER FULL TEXT SEARCH

    - by Joshua
    I've been using FULL-TEXT search for a while, but I cannot seem to get the most relevant results sometimes. Say I have a field with something like "An Overview of Pain Medicine 5/12/2006" and a user types "An Overview 5/12/2006", so we create a search like:

        '"An" AND "Overview" AND "5/12/2006"'  -- 0 results (bad)
        '"Overview" AND "5/12/2006"'           -- 1 result (good)

    The CONTAINSTABLE portion of my query:

        FROM ce_Activity A
        INNER JOIN CONTAINSTABLE(View_Activities, (Searchable), @Search) AS KeyTbl
            ON A.ActivityID = KeyTbl.[KEY]

    "Searchable" is a field that contains the activity title and start date (converted to a string) in one search-friendly field. Why would this happen?
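
    A hedged guess at the cause: "an" sits on SQL Server's default noise-word list, and on SQL Server 2005 a noise word inside an AND chain makes the whole CONTAINS predicate match nothing, which fits the two outcomes above exactly. The usual defense is to strip noise words before building @Search; a sketch (the stripped value shown is illustrative):

        -- 'An Overview 5/12/2006' -> '"Overview" AND "5/12/2006"'
        -- after dropping terms found in the noise-word list.
        DECLARE @Search varchar(200);
        SET @Search = '"Overview" AND "5/12/2006"';

        SELECT A.*
        FROM ce_Activity A
        INNER JOIN CONTAINSTABLE(View_Activities, (Searchable), @Search) AS KeyTbl
            ON A.ActivityID = KeyTbl.[KEY];

    On SQL Server 2008 and later there is also a server-wide alternative: EXEC sp_configure 'transform noise words', 1; RECONFIGURE;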

  • C# JSON serialization

    - by Bridget the Midget
    I'm trying out the HighStock library for creating stock charts. To fill the chart with data, their example specifies this source. The first parameter is Unix time in milliseconds and the second parameter is the stock closing price. I don't know if this is valid JSON, but I would argue that the following would be a more appropriate way of writing JSON:

        [{"Closing":63.15000,"Date":1262559600000},
         {"Closing":64.75000,"Date":1262646000000}, ...

    I guess that I have no other option than to adapt to HighStock's syntax. I could solve this by looping and adding the correct syntax to a string, but that seems rudimentary. Would it be wiser to serialize C# objects to create my JSON, and if that's the case, how can I reach the syntax specified in the example? Let's just say this is my C# object:

        public class Quote
        {
            public double Date { get; set; }
            public decimal Closing { get; set; }
        }

    Am I making it unnecessarily complex? Should I just format a JSON string?
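
    A sketch of the serializer route (JavaScriptSerializer from System.Web.Extensions is assumed; any JSON library that handles nested arrays behaves the same): project each Quote to a two-element array, and the serializer emits HighStock's [[date, close], ...] shape directly:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Script.Serialization;

        public static class HighStockJson
        {
            // [{Date, Closing}, ...] -> "[[1262559600000,63.15],...]"
            public static string ToSeries(IEnumerable<Quote> quotes)
            {
                var pairs = quotes.Select(q => new object[] { q.Date, q.Closing });
                return new JavaScriptSerializer().Serialize(pairs);
            }
        }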

  • One table, need multiple values from different rows/tuples

    - by WmasterJ
    I have tables like:

        profile_values
        userID | fid | value
        -------+-----+------------------
        1      | 3   | [email protected]
        1      | 45  | 203-234-2345
        3      | 3   | [email protected]
        1      | 45  | 123-456-7890

    and:

        users
        userID | name
        -------+------
        1      | joe
        2      | jane
        3      | jake

    I want to join them and have one row with two of the values, like:

        userID | name | email            | phone
        -------+------+------------------+--------------
        1      | joe  | [email protected] | 203-234-2345
        2      | jane | [email protected] | 123-456-7890

    I have solved it, but it feels clumsy, and I want to know if there is a better way to do it, meaning solutions that are either more readable, faster (optimized), or simply best practice. Current solution: multiple tables selected, many conditional statements:

        SELECT u.userID AS memberid,
               u.name AS first_name,
               pv1.value AS fname,
               pv2.value AS lname
        FROM users AS u,
             profile_values AS pv1,
             profile_values AS pv2
        WHERE u.userID = pv1.userID AND pv1.fid = 3
          AND u.userID = pv2.userID AND pv2.fid = 45;

    Thanks for the help!
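
    An alternative worth sketching (fid 3 = email, fid 45 = phone, as in the data above): one LEFT JOIN plus conditional aggregation, so each attribute is picked out of a single pass over profile_values:

        SELECT u.userID,
               u.name,
               MAX(CASE WHEN pv.fid = 3  THEN pv.value END) AS email,
               MAX(CASE WHEN pv.fid = 45 THEN pv.value END) AS phone
        FROM users u
        LEFT JOIN profile_values pv ON pv.userID = u.userID
        GROUP BY u.userID, u.name;

    The LEFT JOIN also keeps users who have no profile rows at all, which the comma-join version silently drops.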

  • Problems using an id from a model inside a custom sql query in Rails

    - by Thiago
    Hi there, I want to make a model class which associates to itself in Rails. Basically, a user has friends, which are also users. I typed the following inside the User model class:

        has_many :friends,
          :class_name  => "User",
          :foreign_key => :user_id,
          :finder_sql  => %{SELECT users.* FROM users
                            INNER JOIN friends
                              ON (users.id = friends.user_id OR users.id = friends.friend_id)
                            WHERE users.id <> #{id}}

    But the funny fact is that this finder_sql seems to be called twice whenever I type User.first.friends in irb. Why?
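
    A hedged aside that may or may not explain the double call, but is a known :finder_sql gotcha in Rails 2.x: inside %{...}, the #{id} interpolates once when the class is loaded, not per record. The documented form uses single-quoted strings so Rails can interpolate against each instance:

        has_many :friends,
          :class_name => "User",
          :finder_sql => 'SELECT users.* FROM users ' +
                         'INNER JOIN friends ON (users.id = friends.user_id ' +
                         'OR users.id = friends.friend_id) ' +
                         'WHERE users.id <> #{id}'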

  • Cleaning up temp folder after long-running subprocess exits

    - by dbr
    I have a Python script (running inside another application) which generates a bunch of temporary images. I then use subprocess to launch an application to view these. When the image-viewing process exits, I want to remove the temporary images. I can't do this from Python, as the Python process may have exited before the subprocess completes, i.e. I cannot do the following:

        p = subprocess.Popen(["imgviewer", "/example/image1.jpg", "/example/image2.jpg"])
        p.communicate()
        os.unlink("/example/image1.jpg")
        os.unlink("/example/image2.jpg")

    ...as this blocks the main thread, nor could I check for the pid exiting in a thread, etc. The only solution I can think of means I have to use shell=True, which I would rather avoid:

        cmd = ['imgviewer']
        cmd.append("/example/image2.jpg")
        for x in cleanup:
            cmd.extend(["&&", "rm", x])
        cmdstr = " ".join(cmd)
        subprocess.Popen(cmdstr, shell = True)

    This works, but is hardly elegant, and will fail with filenames containing spaces, etc. Basically, I have a background subprocess, and want to remove the temp files when it exits, even if the Python process no longer exists.
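
    A shell-free sketch: hand the waiting to a tiny detached Python helper that owns the viewer, blocks until it exits, then unlinks the files. The parent can die immediately (paths as in the question; imgviewer assumed to be on PATH):

        import subprocess
        import sys

        images = ["/example/image1.jpg", "/example/image2.jpg"]

        # Source of the helper process: run the viewer, wait, clean up.
        watcher = (
            "import subprocess, os, sys\n"
            "files = sys.argv[1:]\n"
            "subprocess.call(['imgviewer'] + files)\n"
            "for f in files:\n"
            "    try:\n"
            "        os.unlink(f)\n"
            "    except OSError:\n"
            "        pass\n"
        )
        subprocess.Popen([sys.executable, "-c", watcher] + images)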

  • Select dynamic string has a different value when referenced in Where clause

    - by David
    I dynamically select a string built from another string. So, if string1 = 'David Banner', then MyDynamicString should be 'DBanne':

        SELECT ...,
            LEFT(
                -- first character of the first word
                LEFT((SELECT TOP 1 strval FROM dbo.SPLIT(string1, ' ')), 1)
                -- plus the second word
                + (SELECT TOP 1 strval FROM dbo.SPLIT(string1, ' ')
                   WHERE strval NOT IN (SELECT TOP 1 strval
                                        FROM dbo.SPLIT(string1, ' ')))
            -- 1st character of the 1st word, then up to 5 of the 2nd word
            , 6) AS [MyDynamicString]
            , ...
        FROM table1
        JOIN table2 ON table1pkey = table2fkey
        WHERE MyDynamicString <> table2.someotherfield

    I know table2.someotherfield is not equal to the dynamic string. However, when I replace MyDynamicString in the WHERE clause with the full LEFT(LEFT(...)) expression, it works as expected. Can I not reference this string later in the query? Do I have to build it with the LEFT(LEFT(...)) expression each time in the WHERE clause?
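
    The usual way out, sketched: a column alias is not visible to the WHERE clause of its own SELECT (WHERE is evaluated before the select list), so compute the value one level down and filter on it outside:

        SELECT q.*
        FROM (
            SELECT t1.*,
                   t2.someotherfield,
                   LEFT(
                       LEFT((SELECT TOP 1 strval FROM dbo.SPLIT(t1.string1, ' ')), 1)
                       + (SELECT TOP 1 strval FROM dbo.SPLIT(t1.string1, ' ')
                          WHERE strval NOT IN (SELECT TOP 1 strval
                                               FROM dbo.SPLIT(t1.string1, ' ')))
                   , 6) AS MyDynamicString
            FROM table1 t1
            JOIN table2 t2 ON t1.table1pkey = t2.table2fkey
        ) AS q
        WHERE q.MyDynamicString <> q.someotherfield;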

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations to audio files, such as trimming. This works fine:

        from subprocess import Popen, PIPE

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        pipe = Popen(['sox', '-t', 'mp3', '-', 'test.mp3', 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate(input=open('test.mp3', 'rb').read())
        if errors:
            raise RuntimeError(errors)

    This will cause problems on large files however, since read() loads the complete file into memory, which is slow and may cause the pipe's buffer to overflow. A workaround exists:

        from subprocess import Popen, PIPE
        import tempfile
        import uuid
        import shutil
        import os

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3')
        pipe = Popen(['sox', 'test.mp3', tmp, 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)
        shutil.copy2(tmp, 'test.mp3')
        os.remove(tmp)

    So the question stands as follows: are there any alternatives to this approach, aside from writing a Python extension to the SoX C API?
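
    A middle-ground sketch that keeps the pipe but never loads the whole file: feed stdin in chunks from a helper thread, so neither Python's memory nor the pipe buffer fills up (filenames as in the question):

        import shutil
        import subprocess
        import threading

        def sox_trim(src, dst, start, length):
            pipe = subprocess.Popen(
                ['sox', '-t', 'mp3', '-', dst, 'trim', str(start), str(length)],
                stdin=subprocess.PIPE, stderr=subprocess.PIPE)

            def feed():
                try:
                    with open(src, 'rb') as fh:
                        shutil.copyfileobj(fh, pipe.stdin)  # chunked copy
                except IOError:
                    pass            # sox exited early; its stderr explains
                finally:
                    pipe.stdin.close()

            feeder = threading.Thread(target=feed)
            feeder.start()
            errors = pipe.stderr.read()
            pipe.wait()
            feeder.join()
            if errors:
                raise RuntimeError(errors)

        sox_trim('test.mp3', 'out.mp3', 0, 15)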

  • MySQL SELECT nested query, very complicated?

    - by smartbear
    Okay, first, the following are my tables:

        Table house:
        id | items_id
        ---+-----------
        1  | 1,5,10,20

        Table items:
        id | room_name | refer
        ---+-----------+------
        1  | kitchen   | 3
        5  | room1     | 10

        Table kitchen:
        id | detail_name | refer
        ---+-------------+------
        3  | spoon       | 4
        5  | fork        | 10

        Table spoon:
        id | name    | color | price | quantity_available
        ---+---------+-------+-------+-------------------
        4  | spoon_a | white | 50    | 100
        5  | spoon_b | black | 30    | 200

    How do I do a nested SELECT statement where I select the id, name, color, price and quantity_available columns for each value inside the items_id column of the house table? This is very challenging!!

    EDIT (after reading robin's answer): if the house table looks like this instead, how do I do the nested, join, or whatever SELECT statement?

        Table house:
        id     | items_id
        -------+---------
        house1 | 1
        house1 | 5
        house1 | 10
        house2 | 20
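
    A sketch against the normalized layout from the edit, assuming the refer columns chain items -> kitchen -> spoon as the sample rows suggest:

        SELECT s.id, s.name, s.color, s.price, s.quantity_available
        FROM house h
        JOIN items i   ON i.id = h.items_id   -- one row per item of the house
        JOIN kitchen k ON k.id = i.refer      -- items.refer -> kitchen.id
        JOIN spoon s   ON s.id = k.refer      -- kitchen.refer -> spoon.id
        WHERE h.id = 'house1';

    With the original comma-separated column, the first join condition becomes FIND_IN_SET(i.id, h.items_id) > 0 instead, but normalizing, as in the edit, is the better fix.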

  • My First F# program

    - by sudaly
    Hi, I just finished writing my first F# program. Functionality-wise the code works the way I wanted, but I'm not sure if it is efficient. I would much appreciate it if someone could review the code for me and point out the areas where it can be improved. Thanks, Sudaly

        open System
        open System.IO
        open System.IO.Pipes
        open System.Text
        open System.Collections.Generic
        open System.Runtime.Serialization

        [<DataContract>]
        type Quote = {
            [<field: DataMember(Name="securityIdentifier") >] RicCode:string
            [<field: DataMember(Name="madeOn") >] MadeOn:DateTime
            [<field: DataMember(Name="closePrice") >] Price:float
        }

        let m_cache = new Dictionary<string, Quote>()

        let ParseQuoteString (quoteString:string) =
            let data = Encoding.Unicode.GetBytes(quoteString)
            let stream = new MemoryStream()
            stream.Write(data, 0, data.Length)
            stream.Position <- 0L
            let ser = Json.DataContractJsonSerializer(typeof<Quote array>)
            let results:Quote array = ser.ReadObject(stream) :?> Quote array
            results

        let RefreshCache quoteList =
            m_cache.Clear()
            quoteList |> Array.iter (fun result -> m_cache.Add(result.RicCode, result))

        let EstablishConnection() =
            let pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.InOut, 4)
            let mutable sr = null
            printfn "[F#] NamedPipeServerStream thread created, Wait for a client to connect"
            pipeServer.WaitForConnection()
            printfn "[F#] Client connected."
            try
                // Stream for the request.
                sr <- new StreamReader(pipeServer)
            with
            | _ as e -> printfn "[F#]ERROR: %s" e.Message
            sr

        while true do
            let sr = EstablishConnection()
            // Read request from the stream.
            printfn "[F#] Ready to Receive data"
            sr.ReadLine() |> ParseQuoteString |> RefreshCache
            printfn "[F#]Quot Size, %d" m_cache.Count
            let quot = m_cache.["MSFT.OQ"]
            printfn "[F#]RIC: %s" quot.RicCode
            printfn "[F#]MadeOn: %s" (String.Format("{0:T}", quot.MadeOn))
            printfn "[F#]Price: %f" quot.Price
