Search Results

Search found 72722 results on 2909 pages for 'file processing'.


  • Python .app doesn't read .txt file like it should

    - by Bambo
    This question relates to this one: Python app which reads and writes into its current working directory as a .app/exe. I got the path to the .txt file fine; however, when I now try to open it and read the contents, the data doesn't get extracted properly. Here's my code: http://pastie.org/4876896 These are the errors I'm getting:

      30/09/2012 10:28:49.103 [0x0-0x4e04e].org.pythonmac.unspecified.main: for index, item in enumerate( lines ): # iterate through lines
      30/09/2012 10:28:49.103 [0x0-0x4e04e].org.pythonmac.unspecified.main: TypeError: 'NoneType' object is not iterable

    I kind of understand what the errors mean, but I'm not sure why they are being flagged up, because if I run my script outside of the .app form it doesn't get these errors and extracts the data fine.
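    A minimal sketch of the usual workaround, assuming the root cause is that a .app bundle is launched with a different working directory than the folder the script (and its .txt file) lives in; the file name data.txt and the read_lines helper below are made up for illustration:

      import os
      import sys

      def read_lines(filename):
          # Resolve the path relative to the script itself rather than the
          # current working directory, which changes inside a .app bundle.
          base_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
          path = os.path.join(base_dir, filename)
          if not os.path.exists(path):
              return None  # caller must handle the missing-file case explicitly
          with open(path) as f:
              return f.readlines()

      lines = read_lines("data.txt")
      if lines is None:
          raise SystemExit("data.txt was not found next to the script")
      for index, item in enumerate(lines):
          print(index, item.rstrip())

    Checking for None before the enumerate() loop is what avoids the TypeError shown in the log when the file cannot be found.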


  • Disable MSBuild output of "Processing /ORDER options..."

    - by Jippers
    The output file from our project build has grown from 6MB to over 75MB of text. Diffing the last good build against the first one that blew up, the latest log contains a section like this:

      Processing /ORDER options
      External code objects not listed in the /ORDER file:
        ?onCallDisconnected@CallStateConnected@CallImpl@space@@UAEXV?$shared_ptr@VCallImpl@space@@@boost@@V?$shared_ptr@VGenericCall@space@@@5@K@Z ; framework.lib(CallStates.obj)
        ??_DBoolSetting@space@@QAEXXZ ; framework.lib(SettingValueImpl.obj)
        ...... continues for ~50MB
        ??$?0U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@?$allocator@U_Node@?$_Tree_nod@V?$_Tmap_traits@V?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@JU?$less@V?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@@2@V?$allocator@U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@2@$0A@@std@@@std@@@std@@QAE@ABV?$allocator@U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@1@@Z ; CallStatistics.obj
      Finished processing /ORDER options

    I'm not sure how this got in there, but does anyone know how to turn it off?


  • Live Mesh has screwed up my file permissions

    - by Jason
    I got the bright idea of using Live Mesh to sync up my development directories between my laptop and desktop machines. It appears that the permissions on any new files added through Live Mesh do not inherit from the parent directory, and now I cannot overwrite the permissions on those files: I keep getting an "Access is Denied" error when attempting to do so, even if I am running Windows Explorer as administrator. I have two questions: How can I modify the file permissions so that they inherit again? And has anyone used Live Mesh to do this sort of thing, or should I be using FolderShare instead?


  • Batch processing JDBC

    - by Wai Hein
    I am practicing JDBC batch processing and am getting these errors:

      error 1: Unsupported feature
      error 2: Execute cannot be empty or null

    The property file includes:

      itemsdao.updateBookName = Update Books set bookname = ? where books.id = ?
      itemsdao.updateAuthorName = Update books set authorname = ? where books.id = ?

    I know I could execute both DML statements in one update, but I am practicing batch processing in JDBC. Below is my method:

      public void update(Item item) {
          String query = null;
          try {
              connection = DbConnector.getConnection();
              property = SqlPropertiesLoader.getProperties("dml.properties");
              connection.setAutoCommit(false);

              if ( property == null ) {
                  Logging.log.debug("dml.properties does not exist. Check property loader or file name is spelled right");
                  return;
              }

              query = property.getProperty("itemsdao.updateBookName");
              statement = connection.prepareStatement(query);
              statement.setString(1, item.getBookName());
              statement.setInt(2, item.getId());
              statement.addBatch(query);

              query = property.getProperty("itemsdao.updateAuthorName");
              statement = connection.prepareStatement(query);
              statement.setString(1, item.getAuthorName());
              statement.setInt(2, item.getId());
              statement.addBatch(query);

              statement.executeBatch();
              connection.commit();
          } catch (ClassNotFoundException e) {
              Logging.log.error("Connection class does not exist", e);
          } catch (SQLException e) {
              Logging.log.error("Violating PK constraint", e);
          }
          //helper class th
          finally {
              DbUtil.close(connection);
              DbUtil.closePreparedStatement(statement);
          }
      }


  • Optimizing memory usage and changing file contents with PHP

    - by errata
    In a function like this:

      function download($file_source, $file_target) {
          $rh = fopen($file_source, 'rb');
          $wh = fopen($file_target, 'wb');
          if (!$rh || !$wh) {
              return false;
          }
          while (!feof($rh)) {
              if (fwrite($wh, fread($rh, 1024)) === FALSE) {
                  return false;
              }
          }
          fclose($rh);
          fclose($wh);
          return true;
      }

    what is the best way to rewrite the last few bytes of a file with my custom string? Thanks!
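    The question is about PHP, but the underlying technique is the same in most languages: open the file for in-place read/write, seek back from the end, and write. A rough Python sketch of that idea, assuming the replacement string is exactly as long as the bytes it should overwrite:

      def overwrite_tail(path, replacement):
          # Replace the last len(replacement) bytes of the file in place.
          data = replacement if isinstance(replacement, bytes) else replacement.encode()
          with open(path, "r+b") as f:   # read/write binary, no truncation
              f.seek(-len(data), 2)      # whence=2 seeks relative to the end of the file
              f.write(data)

      # e.g. stamp a marker over the final four bytes of the copied file
      overwrite_tail("target.bin", b"DONE")

    In PHP the equivalent pieces would be fopen with 'r+b', fseek with SEEK_END and a negative offset, then fwrite.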


  • How to save multiple UIImages to a file on iPad

    - by aron
    I have a PDF reader that displays pages of the document. What I want to do is allow the user to draw over the PDF in a transparent view. Then I want to save the drawing (UIImage) to disk. If at all possible, I don't want to have the documents folder filled with files like documentName_page01.png, documentName_page02.png for every page that is drawn over. However, I can't figure out how to store these UIImages into a single file without it becoming unwieldy and memory intensive. Any ideas appreciated.


  • Reading in data from a file into an array

    - by Sam
    If I have an options file along the lines of this:

      size = 4
      data = 1100010100110010

    and I have a 2D size * size array that I want to populate with the values in data, what's the best way of doing it? To clarify, for the example above I'd want an array like this:

      int[4][4] array = {{1,1,0,0},
                         {0,1,0,1},
                         {0,0,1,1},
                         {0,0,1,0}};

    (Not real code, but you get the idea.) Size can really be any number, though. I'm thinking I'd have to read in the size, malloc an array, and then maybe read the data into a string and loop through each char, cast it to an int, and stick it in the appropriate index? But I really have no idea how to go about it, and I've been searching for a while with no luck. Any help would be cool! :)
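    The question is aimed at C (hence the malloc), but the read-size-then-fill loop the asker describes is the same in any language. A quick Python sketch of that idea, assuming the options file looks exactly like the example (the file name options.txt is made up):

      def read_grid(path):
          values = {}
          with open(path) as f:
              for line in f:
                  if "=" in line:
                      key, value = line.split("=", 1)
                      values[key.strip()] = value.strip()
          size = int(values["size"])
          data = values["data"]
          # fill a size x size grid row by row, one character at a time
          return [[int(data[row * size + col]) for col in range(size)]
                  for row in range(size)]

      print(read_grid("options.txt"))
      # [[1, 1, 0, 0], [0, 1, 0, 1], [0, 0, 1, 1], [0, 0, 1, 0]]

    In C the shape is the same: read size, malloc size * size ints, then walk the data string converting each character with c - '0'.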


  • Split file with PHP and generate contents

    - by user201140
    How do I split the content below into separate files without the placeholder tags? I'd also like to take the text inside the placeholder tags and place it inside a new contents file.

      <div class='placeholder'>The First Chapter</div>
      This is some text.
      <div class='placeholder'>The Second Chapter</div>
      This is some more text.
      <div class='placeholder'>Last Chapter</div>
      The last chapter.

    Thanks.


  • Filtering Data in a Text File with Python

    - by YAS
    I'm new to Python (like zygote new), and this is just to supplement another program. What I have is a text file that holds a group of items for a game, formatted like so:

      [1]
      Name=Blah
      Faction=Blahdiddly
      Cost=1000

      [2]
      Name=Meh
      Faction=MehMeh
      Cost=2000

      [3]
      Name=Lollypop
      Faction=Blahdiddly
      Cost=100

    I need to be able to find out which groups (the numbers in brackets) have matching values. So if I search for Faction=Blahdiddly, groups 1 & 3 should come up. I unfortunately have NO idea how to do this. Can anyone help?
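    A minimal sketch of one way to approach it, assuming the file really is laid out as above (the file name items.txt is made up):

      def load_groups(path):
          groups = {}          # group number -> {field: value}
          current = None
          with open(path) as f:
              for line in f:
                  line = line.strip()
                  if line.startswith("[") and line.endswith("]"):
                      current = int(line[1:-1])
                      groups[current] = {}
                  elif "=" in line and current is not None:
                      key, value = line.split("=", 1)
                      groups[current][key] = value
          return groups

      def find_matches(groups, key, value):
          return [num for num, fields in groups.items() if fields.get(key) == value]

      groups = load_groups("items.txt")
      print(find_matches(groups, "Faction", "Blahdiddly"))   # e.g. [1, 3]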


  • Groovy and XML: Not able to insert processing instruction

    - by rhellem
    Scenario: I need to update some attributes in an existing XML file. The file contains an XSL processing instruction, so when the XML is parsed and updated I need to add the instruction back before writing it to a file again. The problem is that, whatever I do, I'm not able to insert the processing instruction. Based on the Java example found at rgagnon.com I have created the code below.

    Example code:

      import groovy.xml.*

      def xml = '''|<something>
                   | <Settings>
                   | </Settings>
                   |</something>'''.stripMargin()

      def document = DOMBuilder.parse( new StringReader( xml ) )
      def pi = document.createProcessingInstruction('xml-stylesheet', 'type="text/xsl" href="Bp8DefaultView.xsl"');
      document.insertBefore(pi, document.documentElement)
      println document.documentElement

    This creates the output:

      <?xml version="1.0" encoding="UTF-8"?>
      <something>
       <Settings>
       </Settings>
      </something>

    What I want:

      <?xml version="1.0" encoding="UTF-8"?>
      <?xml-stylesheet type="text/xsl" href="Bp8DefaultView.xsl"?>
      <something>
       <Settings>
       </Settings>
      </something>


  • Assistance with CC Processing script

    - by JM4
    I am currently implementing a credit card processing script, mostly as provided by the merchant gateway. The code calls functions within a class and returns a string based on the response. The final PHP code I am using (details removed, of course) with example information is:

      <?php
      $gw = new gwapi;
      $gw->setLogin("username", "password");
      $gw->setBilling("John","Smith","Acme, Inc.","888","Suite 200", "Beverly Hills",
          "CA","77777","US","555-555-5555","555-555-5556","[email protected]", "www.example.com");
      // "CA","90210","US","[email protected]");
      $gw->setOrder("1234","Big Order",1, 2, "PO1234","65.192.14.10");
      $r = $gw->doSale("1.00","4111111111111111","1010");
      print $gw->responses['responsetext'];
      ?>

    where setLogin logs me in, setBilling takes the sample consumer information, setOrder takes the order id and description, and doSale takes the amount charged, the card number, and the expiration date. When all the variables are validated and sent off for processing, a string is returned in the following format:

      response=1&responsetext=SUCCESS&authcode=123456&transactionid=23456&avsresponse=M&orderid=&type=sale&response_code=100

    where:

      response      = transaction approved or declined
      responsetext  = textual response
      authcode      = transaction authorization code
      transactionid = payment gateway transaction id
      avsresponse   = AVS response code
      orderid       = original order id passed in the transaction request
      response_code = numeric mapping of the processor response

    I am trying to solve the following:

    1. How do I take the data that is passed back and display it appropriately on the page? If the transaction failed, or the AVS code doesn't match my liking, or something else is wrong, an error is displayed to the consumer; if the transaction processed, they are taken to a completion page and the transaction id is sent in SESSION as output to the consumer.
    2. If the response_code value matches a table of values, certain actions are taken, e.g. if code = 100, take the user to the success page; if code = 300, print a specific error on the original page for the customer, etc.


  • Python performance profiling (file close)

    - by user1853986
    First of all, thanks for your attention. My question is how to reduce the execution time of my code. Here is the relevant part; the code below is called in a loop from main:

      def call_prism(prism_input_file, random_length):
          prism_output_file = "path.txt"
          cmd = "prism %s -simpath %d %s" % (prism_input_file, random_length, prism_output_file)
          p = os.popen(cmd)
          p.close()
          return prism_output_file

      def main(prism_input_file, number_of_strings):
          ...
          for n in range(number_of_strings):
              prism_output_file = call_prism(prism_input_file, z[n])
          ...
          return

    I used statistics from the "profile statistics browser" when I profiled my code. The "file close" system call took the most time (14.546 seconds). The call_prism routine is called 10 times here, but number_of_strings is usually in the thousands, so my program takes a long time to complete. Let me know if you need more information. By the way, I tried subprocess too. Thanks.
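    For what it's worth, a rough sketch of the subprocess route, on the assumption that the large "file close" time is mostly p.close() blocking until the external prism run finishes, so the real win comes from running several prism invocations concurrently; separate output file names are assumed here so parallel runs don't overwrite each other:

      import subprocess
      from concurrent.futures import ThreadPoolExecutor

      def call_prism(prism_input_file, random_length, prism_output_file):
          cmd = ["prism", prism_input_file, "-simpath", str(random_length), prism_output_file]
          subprocess.run(cmd, check=True)   # still blocks until prism exits
          return prism_output_file

      def run_all(prism_input_file, lengths):
          # Each call spends its time inside the external prism process, so a small
          # thread pool is enough to keep several prism runs going at once.
          with ThreadPoolExecutor(max_workers=4) as pool:
              futures = [pool.submit(call_prism, prism_input_file, n, "path_%d.txt" % i)
                         for i, n in enumerate(lengths)]
              return [f.result() for f in futures]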


  • How can I fix "E: Internal Error, No file name for libc6"

    - by SMAOUH
    Hello all, please I need your help to fix this problem. I have 2 broken packages on the system and I can't reinstall them or do anything else: update, upgrade, install & remove all fail. This is Ubuntu 12.04.3. I have not found any solutions, please help me.

      smaouh@Linux:~$ sudo apt-get install -f
      [sudo] password for smaouh:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages were automatically installed and are no longer required:
        libopenal1 libpam-winbind libao-common gnome-exe-thumbnailer libqca2-plugin-ossl
        gir1.2-champlain-0.12 libmagickcore4 libmagickwand4 libmagickcore4-extra libcapi20-3
        python-unidecode libopenal-data liblqr-1-0 gir1.2-gtkchamplain-0.12 unixodbc
        wine-gecko2.21 libchamplain-0.12-0 python-glade2 imagemagick-common libosmesa6
        oss-compat gimp-help-common esound-common gimp-help-en libmpg123-0
        ttf-mscorefonts-installer imagemagick winbind libodbc1 fonts-droid fonts-unfonts-core
        libchamplain-gtk-0.12-0 libclutter-gtk-1.0-0 gir1.2-gtkclutter-1.0
      Use 'apt-get autoremove' to remove them.
      0 upgraded, 0 newly installed, 0 to remove and 386 not upgraded.
      4 not fully installed or removed.
      After this operation, 0B of additional disk space will be used.
      dpkg: error processing libc6 (--configure):
       libc6:amd64 2.15-0ubuntu10.5 cannot be configured because libc6:i386 is in a different version (2.15-0ubuntu10.4)
      dpkg: dependency problems prevent configuration of libc-dev-bin:
       libc-dev-bin depends on libc6 (>> 2.15); however:
        Package libc6 is not configured yet.
       libc-dev-bin depends on libc6 (<< 2.16); however:
        Package libc6 is not configured yet.
      dpkg: error processing libc-dev-bin (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of libc6-dev:
       libc6-dev depends on libc6 (= 2.15-0ubuntu10.5); however:
        Package libc6 is not configured yet.
       libc6-dev depends on libc-dev-bin (= 2.15-0ubuntu10.5); however:
        Package libc-dev-bin is not configured yet.
      dpkg: error processing libc6-dev (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of libc6-i386:
       libc6-i386 depends on libc6 (= 2.15-0ubuntu10.5); however:
        Package libc6 is not configured yet.
      dpkg: error processing libc6-i386 (--configure):
       dependency problems - leaving unconfigured
      No apport report written because the error message indicates its a followup error from a previous failure.
      No apport report written because the error message indicates its a followup error from a previous failure.
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
       libc6
       libc-dev-bin
       libc6-dev
       libc6-i386
      E: Sub-process /usr/bin/dpkg returned an error code (1)
      smaouh@Linux:~$


  • How can I do a large file upload using Sinatra, haml, nginx, and passenger?

    - by mmr
    Hi all, I need to allow a user to upload 30-60 MB files at a time. Right now, I'm solving the problem with a simple form post:

      %form{:action=>"/Upload",:method=>"post",:enctype=>"multipart/form-data"}
        - @theModelHash.each do |key,value|
          %br
          %input{:type=>"checkbox", :name=>"#{key}", :value=>1, :checked=>value}
          =key
        %br
        %input{:type=>"file",:name=>"file"}
        %input{:type=>"submit",:value=>"Upload"}

    This form allows the user to select processing options contained in theModelHash and upload a file for processing. The problem is that this method both freezes the user's UI and requires that the entire form be reposted when the user presses the 'back' button. I've looked at SWFUpload, but have no idea how to integrate it into my relatively simple app. There's a page here about integrating it with Rails, but I'm using Sinatra, and I'm new enough to this whole web programming thing that I don't know how to modify those files to do what I need. Is there a how-to for adding large file uploads to my form? Something relatively simple that just adds a progress bar and doesn't repost? I feel like I'm having to triple the size of my application just to make this feature play nice, and that's bothering me a bit.


  • Flow control in a batch file

    - by dboarman-FissureStudios
    Reference: Iterating arrays in a batch file. I have the following:

      for /f "tokens=1" %%Q in ('query termserver') do (
          if not ERRORLEVEL (
              echo Checking %%Q
              for /f "tokens=1" %%U in ('query user %UserID% /server:%%Q') do (echo %%Q)
          )
      )

    When running query termserver from the command line, the first two lines are:

      Known
      -------------------------

    ...followed by the list of terminal servers. However, I do not want to include these as part of the query user command. Also, there are about 4 servers I do not wish to include. When I supply UserID with this code, the program promptly exits. I know it has something to do with the if statement. Is it not possible to nest flow control inside the for loop? I also tried setting a variable to exactly the names of the servers I wanted to check, but the iteration would end on the first server:

      set TermServers=Server1.Server2.Server3.Server7.Server8.Server10
      for /f "tokens=2 delims=.=" %%Q in ('set TermServers') do (
          echo Checking %%Q
          for /f "tokens=1" %%U in ('query user %UserID% /server:%%Q') do (echo %%Q)
      )

    I would prefer this second example over the first, if nothing else for cleanliness. Any help regarding either of these issues would be greatly appreciated.


  • Versioning friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizable data structure to disk (edit: think dozens of MB's). Being an optimist, I thought that there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements:

    1. .NET 2.0 support, preferably with a FOSS implementation
    2. Version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields)
    3. Ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results)
    4. Space and time efficient (XML has been excluded as an option given this requirement)

    Options considered so far:

      • Protocol Buffers: was turned down by verdict of the documentation about Large Data Sets - since this comment suggested adding another layer on top, this would call for additional complexity which I wish to have handled by the file format itself.
      • HDF5, EXI: do not seem to have .NET implementations
      • SQLite / SQL Server Compact Edition: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use
      • BSON: does not appear to support requirement 3.
      • Fast Infoset: only seems to have paid .NET implementations.

    Any recommendations or pointers are greatly appreciated. Furthermore if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.


  • How to save a picture to a file?

    - by Peter vdL
    I'm trying to use a standard Intent that will take a picture, then allow approval or retake. Then I want to save the picture into a file. Here's the Intent I am using:

      Intent intent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
      startActivityForResult(intent, 22);

    The docs at http://developer.android.com/reference/android/provider/MediaStore.html#ACTION_IMAGE_CAPTURE say "The caller may pass an extra EXTRA_OUTPUT to control where this image will be written. If the EXTRA_OUTPUT is not present, then a small sized image is returned as a Bitmap object in the extra field. If the EXTRA_OUTPUT is present, then the full-sized image will be written to the Uri value of EXTRA_OUTPUT." I don't pass EXTRA_OUTPUT; I hope to get a Bitmap object in the extra field of the Intent passed into onActivityResult() (for this request). So where/how do you extract it? Intent has a getExtras(), but that returns a Bundle, and a Bundle wants a key string to give you something back. What do you invoke on the Intent to extract the bitmap?


  • aio_read from file error on OS X

    - by Pyetras
    The following code:

      #include <fcntl.h>
      #include <unistd.h>
      #include <stdio.h>
      #include <aio.h>
      #include <errno.h>

      int main (int argc, char const *argv[])
      {
        char name[] = "abc";
        int fdes;
        if ((fdes = open(name, O_RDWR | O_CREAT, 0600 )) < 0)
          printf("%d, create file", errno);
        int buffer[] = {0, 1, 2, 3, 4, 5};
        if (write(fdes, &buffer, sizeof(buffer)) == 0){
          printf("writerr\n");
        }
        struct aiocb aio;
        int n = 2;
        while (n--){
          aio.aio_reqprio = 0;
          aio.aio_fildes = fdes;
          aio.aio_offset = sizeof(int);
          aio.aio_sigevent.sigev_notify = SIGEV_NONE;
          int buffer2;
          aio.aio_buf = &buffer2;
          aio.aio_nbytes = sizeof(buffer2);
          if (aio_read(&aio) != 0){
            printf("%d, readerr\n", errno);
          }else{
            const struct aiocb *aio_l[] = {&aio};
            if (aio_suspend(aio_l, 1, 0) != 0){
              printf("%d, suspenderr\n", errno);
            }else{
              printf("%d\n", *(int *)aio.aio_buf);
            }
          }
        }
        return 0;
      }

    works fine on Linux (Ubuntu 9.10, compiled with -lrt), printing:

      1
      1

    But it fails on OS X (10.6.6 and 10.6.5, I've tested it on two machines):

      1
      35, readerr

    Is it possible that this is due to some library error on OS X, or am I doing something wrong?


  • [UNIX] Sort lines of massive file by number of words on line (ideally in parallel)

    - by conradlee
    I am working on a community detection algorithm for analyzing social network data from Facebook. The first task, detecting all cliques in the graph, can be done efficiently in parallel, and leaves me with an output like this:

      17118 17136 17392
      17064 17093 17376
      17118 17136 17356 17318 12345
      17118 17136 17356 17283
      17007 17059 17116

    Each of these lines represents a unique clique (a collection of node ids), and I want to sort these lines in descending order by the number of ids per line. In the case of the example above, here's what the output should look like:

      17118 17136 17356 17318 12345
      17118 17136 17356 17283
      17118 17136 17392
      17064 17093 17376
      17007 17059 17116

    (Ties---i.e., lines with the same number of ids---can be sorted arbitrarily.) What is the most efficient way of sorting these lines? Keep the following points in mind:

      • The file I want to sort could be larger than the physical memory of the machine
      • Most of the machines that I'm running this on have several processors, so a parallel solution would be ideal
      • An ideal solution would just be a shell script (probably using sort), but I'm open to simple solutions in Python or Perl (or any language, as long as it makes the task simple)
      • This task is in some sense very easy---I'm not just looking for any old solution, but rather for a simple and above all efficient solution
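    Since the asker is open to Python, a rough sketch of the decorate-sort-undecorate idea; this in-memory version is only a starting point, and the assumed input/output file names are placeholders:

      import sys

      def sort_by_clique_size(in_path, out_path):
          # Decorate-sort-undecorate: pair each line with its id count, sort the
          # pairs descending, then write the lines back out.  For a file larger
          # than memory, write "count<TAB>line" records and pipe them through
          # GNU "sort -rn" instead, which spills to disk and supports --parallel.
          with open(in_path) as f:
              decorated = [(len(line.split()), line) for line in f if line.strip()]
          decorated.sort(key=lambda pair: pair[0], reverse=True)
          with open(out_path, "w") as out:
              for _, line in decorated:
                  out.write(line)

      if __name__ == "__main__":
          sort_by_clique_size(sys.argv[1], sys.argv[2])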


  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates... Me and a couple of others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL Server over the network - large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

      • Security
      • Concurrent access
      • Performance with large amounts of data
      • Amount of time to do such a massive rewrite/switch
      • Lack of transactions
      • PITA to map relational data to flat files
      • NTFS doesn't support tons of files in a directory well

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.


  • Batch File to Delete Folders

    - by Homebrew
    I found some code to delete folders, in this case deleting all but 'n' # of folders. I created 10 test folders, plus 1 that was already there. I want to delete all but 4. The code works, it leaves 4 of my test folders, except that it also leaves the other folder. Is there some attribute of the other folder that's getting checked in the batch file that's stopping it from getting deleted? It was created through a job a couple of weeks ago. Here's the code I stole (but don't really understand the details):

      rem DOS - Delete Folders if # folders > n
      @Echo Off
      :: User Variables
      :: Set this to the number of folders you want to keep
      Set _NumtoKeep=4
      :: Set this to the folder that contains the folders to check and delete
      Set _Path=C:\MyFolder_Temp\FolderTest

      If Exist "%temp%\tf}1{" Del "%temp%\tf}1{"
      PushD %_Path%
      Set _s=%_NumtoKeep%
      If %_NumtoKeep%==1 set _s=single
      For /F "tokens=* skip=%_NumtoKeep%" %%I In ('dir "%_Path%" /AD /B /O-D /TW') Do (
          If Exist "%temp%\tf}1{" (
              Echo %%I:%%~fI >>"%temp%\tf}1{"
          ) Else (
              Echo.>"%temp%\tf}1{"
              Echo Do you wish to delete the following folders?>>"%temp%\tf}1{"
              Echo Date Name>>"%temp%\tf}1{"
              Echo %%I:%%~fI >>"%temp%\tf}1{"
          ))
      PopD
      If Not Exist "%temp%\tf}1{" Echo No Folders Found to delete & Goto _Done
      Type "%temp%\tf}1{" | More
      Set _rdflag= /q
      Goto _Removeold
      Set _rdflag=
      :_Removeold
      For /F "tokens=1* skip=3 Delims=:" %%I In ('type "%temp%\tf}1{"') Do (
          If "%_rdflag%"=="" Echo Deleting
          rd /s%_rdflag% "%%J")
      :_Done
      If Exist "%temp%\tf}1{" Del "%temp%\tf}1{"


  • loading files through one file to hide locations

    - by Phil Jackson
    Hello all. I'm currently doing a project in which my client does not want file locations (i.e. folder names and paths) to be displayed, so I have done something like this:

      <link href="./?0000=css&0001=0001&0002=css" rel="stylesheet" type="text/css" />
      <link href="./?0000=css&0001=0002&0002=css" rel="stylesheet" type="text/css" />
      <script src="./?0000=js&0001=0000&0002=script" type="text/javascript"></script>
      </head>
      <body>
      <div id="wrapper">
        <div id="header">
          <div id="left_header">
            <img src="./?0000=jpg&0001=0001&0002=pic" width="277" height="167" alt="" />
          </div>
          <div id="right_header">
            <div id="top-banner"></div>
            <ul id="navigation">
              <li><a href="#" title="#" id="nav-home">Home</a></li>
              <li><a href="#" title="#">Signup</a></li>

    It all works, but my question is: will this cause any complications, e.g. for the speed of the site, since all requests are being made to one single file which then loads in the appropriate data? Regards, Phil


  • C++ File manipulation problem

    - by Carlucho
    I am trying to open a file which normally has content. For testing purposes I would like to initialize the program without the files being available/existing, in which case the program should create empty ones, but I am having issues implementing that. This is my code originally:

      void loadFiles() {
          fstream city;
          city.open("city.txt", ios::in);
          fstream latitude;
          latitude.open("lat.txt", ios::in);
          fstream longitude;
          longitude.open("lon.txt", ios::in);

          while(!city.eof()){
              city >> cityName;
              latitude >> lat;
              longitude >> lon;
              t.add(cityName, lat, lon);
          }

          city.close();
          latitude.close();
          longitude.close();
      }

    I have tried everything I can think of: ofstream, ifstream, adding ios::out and all its variations. Could anybody explain what to do in order to fix the problem? Thanks!


  • NullReferenceException when reading from a file

    - by Whitey
    I need to read a file structured like this:

      01000
      00030
      00500
      03000
      00020

    And put it in an array like this:

      int[,] iMap = new int[iMapHeight, iMapWidth]
      {
          {0, 1, 0, 0, 0},
          {0, 0, 0, 3, 0},
          {0, 0, 5, 0, 0},
          {0, 3, 0, 0, 0},
          {0, 0, 0, 2, 0},
      };

    Hopefully you see what I'm trying to do here. I was confused how to do this so I asked here on SO, but the code I got from it gets this error: "Object reference not set to an instance of an object." I'm pretty new to this so I have no idea how to fix it... I only barely know the code:

      protected void ReadMap(string mapPath)
      {
          using (var reader = new StreamReader(mapPath))
          {
              for (int i = 0; i < iMapHeight; i++)
              {
                  string line = reader.ReadLine();
                  for (int j = 0; j < iMapWidth; j++)
                  {
                      iMap[i, j] = (int)(line[j] - '0');
                  }
              }
          }
      }

    The line I get the error on is this:

      iMap[i, j] = (int)(line[j] - '0');

    Can anyone provide a solution? Thank you. :)

