Search Results

Search found 45804 results on 1833 pages for 'large files'.


  • How would you handle making an array or list that would have more entries than the standard implementation allows?

    - by faceless1_14
    I am trying to create an array or list that could handle, in theory, given adequate hardware and such, as many as 100^100 BigInteger entries. The problem with using an array or a standard list is that they can only hold Integer.MAX_VALUE entries. How would you work around this limitation? A whole new class/interface? A wrapper for a list? Another data type entirely?
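    A language-neutral way to think about the wrapper idea is to split the big index into (block, offset) and hide a list of fixed-size blocks behind one facade. A conceptual sketch in Python follows (Python lists don't share Java's Integer.MAX_VALUE limit, so this only illustrates the structure; the block size and class name are arbitrary):

        class ChunkedList:
            """Facade over many fixed-size blocks, addressed by one large index."""

            BLOCK = 1 << 20                    # entries per underlying block

            def __init__(self):
                self._blocks = []
                self._size = 0

            def append(self, value):
                if self._size % self.BLOCK == 0:
                    self._blocks.append([])    # start a new block when the last one is full
                self._blocks[-1].append(value)
                self._size += 1

            def __getitem__(self, index):
                block, offset = divmod(index, self.BLOCK)
                return self._blocks[block][offset]

            def __len__(self):
                return self._size

    In Java the same shape works as a facade over an array (or list) of sub-arrays, with long or BigInteger arithmetic doing the index split.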

    Read the article

  • Editing a 9gb .sql file

    - by CERIQ
    Hi. I've got a "slightly" large SQL script saved as a text file. It comes in at 8.92 GB, so it's a bit of a beast. I've got to do some search and replaces in this file (specifically, change all NOT NULL to NULL, so all fields are nullable) and then execute the darned thing. Does anyone have any suggestions for a text editor that would be capable of this? The other way that I can see to solve the problem is to write a program that reads a chunk, does a replace on the stuff I need, and then saves it to a new file, but I'd rather use some standard way of doing this. It also does not solve the problem of opening the beast up in SQL Server Management Studio to execute the darned thing... Any ideas? Thanks, Eric
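    A minimal sketch of the chunked approach described above, in Python: stream the dump line by line and write a modified copy, so memory use stays flat regardless of file size. The file names are placeholders, and a naive NOT NULL -> NULL replacement will also hit any occurrences inside string literals, so the rule may need tightening:

        # Stream the huge dump and rewrite it without ever holding it in memory.
        with open("dump.sql", "r", encoding="utf-8", errors="replace") as src, \
             open("dump_nullable.sql", "w", encoding="utf-8") as dst:
            for line in src:
                dst.write(line.replace("NOT NULL", "NULL"))

    Executing the result is a separate problem; for a file this size the sqlcmd command-line tool is generally a better fit than opening it in Management Studio.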

    Read the article

  • expat parser: memory consumption

    - by sameer karjatkar
    Hi, I am using the expat parser to parse an XML file of around 15 GB. The problem is that it throws an "Out of Memory" error and the program aborts. I want to know whether anybody has faced a similar issue with the expat parser, or whether it is a known bug that has been fixed in later versions?
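    expat is a streaming (SAX-style) parser, so the usual pattern is to feed it the document in chunks and do all the work in callbacks rather than keeping anything in memory; if the process still runs out of memory, it is usually the callbacks (or something else in the program) accumulating data. A sketch with Python's xml.parsers.expat, where the file name and the counted element name are assumptions:

        import xml.parsers.expat

        count = 0

        def start_element(name, attrs):
            # Handle each element as it streams past; nothing is retained.
            global count
            if name == "record":                 # hypothetical element of interest
                count += 1

        parser = xml.parsers.expat.ParserCreate()
        parser.StartElementHandler = start_element

        with open("huge.xml", "rb") as f:
            while True:
                chunk = f.read(1024 * 1024)      # 1 MB at a time
                if not chunk:
                    parser.Parse(b"", True)      # tell the parser the input is finished
                    break
                parser.Parse(chunk, False)

        print(count)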

    Read the article

  • search & replace on 3000 row, 25 column spreadsheet

    - by Deca
    I'm attempting to clean up data in this (old) spreadsheet and need to remove things like single and double quotes, HTML tags and so on. Trouble is, it's a 3000 row file with 25 columns and every spreadsheet app I've tried (NeoOffice, MS Excel, Apple Numbers) chokes on it. Hard. Any ideas on how else I can clean this thing up for import to MySQL? Clearly I could go through each record manually, row by row, but would like to avoid that if at all possible. Likewise, I could write a PHP script to handle it on import, but don't want to put the server into a death spiral either.
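    One way around the spreadsheet apps entirely is to export the sheet as CSV and clean it as text; at 3000 rows by 25 columns that is a trivial amount of data for a script run locally, so the server never gets involved. A rough Python sketch, where the file names and the exact clean-up rules are assumptions:

        import csv
        import re

        TAG_RE = re.compile(r"<[^>]+>")          # strip anything that looks like an HTML tag

        with open("old.csv", newline="", encoding="utf-8") as src, \
             open("clean.csv", "w", newline="", encoding="utf-8") as dst:
            writer = csv.writer(dst)
            for row in csv.reader(src):
                cleaned = [TAG_RE.sub("", cell).replace('"', "").replace("'", "")
                           for cell in row]
                writer.writerow(cleaned)

    The cleaned file can then go through MySQL's LOAD DATA INFILE without any per-row PHP work on import.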

    Read the article

  • How do I quickly search through a .csv file in Python

    - by Baldur
    I'm reading a 6 million entry .csv file with Python, and I want to be able to search through this file for a particular entry. Are there any tricks to search the entire file? Should I read the whole thing into a dictionary, or should I perform a search every time? I tried loading it into a dictionary but that took ages, so I'm currently searching through the whole file every time, which seems wasteful. Could I take advantage of the fact that the list is alphabetically ordered? (e.g. if the search word starts with "b", I only search from the line that includes the first word beginning with "b" to the line that includes the last word beginning with "b".) I'm using import csv. (A side question: is it possible to make csv go to a specific line in the file? I want to make the program start at a random line.) Edit: I already have a copy of the list as an .sql file as well; how could I make use of that from Python?
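    The alphabetical-ordering idea sketched in the question can be done with one cheap pass that records the byte offset where each first letter begins; a lookup then seeks straight to that block and scans only it. A rough Python sketch, assuming the key is the first comma-separated field and the file name is a placeholder:

        from collections import OrderedDict

        def build_letter_index(path):
            """One pass: record the byte offset where each new first letter starts
            (assumes the file is sorted on its first field)."""
            index = OrderedDict()
            with open(path, "rb") as f:
                offset = 0
                for line in iter(f.readline, b""):
                    index.setdefault(line[:1].lower(), offset)
                    offset += len(line)
            return index

        def search(path, index, key):
            """Scan only the block of lines sharing the key's first letter."""
            start = index.get(key[:1].lower().encode())
            if start is None:
                return None
            with open(path, "rb") as f:
                f.seek(start)
                for line in iter(f.readline, b""):
                    first = line.split(b",", 1)[0].decode()
                    if first == key:
                        return line.decode()
                    if first[:1].lower() != key[:1].lower():
                        return None              # left the key's letter block: no match
            return None

    Seeking by byte offset is also the answer to the side question: csv.reader has no notion of line numbers, but it will happily start from a file object that has already been seek()'d to a known offset.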

    Read the article

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quickly. Here's a rundown of what I have, how I'm querying it, and how long it's taking. I'm running SQL Server 2008 Standard, so partitioning isn't currently an option. I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table:

        CREATE TABLE [dbo].[LogInvSearches_Daily](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [Inv_ID] [int] NOT NULL,
            [Site_ID] [int] NOT NULL,
            [LogCount] [int] NOT NULL,
            [LogDay] [smalldatetime] NOT NULL,
            CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED ( [ID] ASC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
        ) ON [PRIMARY]

    This table has 132,000,000 records and is over 4 GB. A sample of 10 rows from the table:

        ID  Inv_ID  Site_ID  LogCount  LogDay
        --  ------  -------  --------  -------------------
        1   486752  48       14        2009-07-21 00:00:00
        2   119314  51       16        2009-07-21 00:00:00
        3   313678  48       25        2009-07-21 00:00:00
        4   298863  0        1         2009-07-21 00:00:00
        5   119996  0        2         2009-07-21 00:00:00
        6   463777  534      7         2009-07-21 00:00:00
        7   339976  503      2         2009-07-21 00:00:00
        8   333501  570      4         2009-07-21 00:00:00
        9   453955  0        12        2009-07-21 00:00:00
        10  443291  0        4         2009-07-21 00:00:00

        (10 row(s) affected)

    I have the following index on LogInvSearches_Daily:

        /****** Object: Index [IX_LogInvSearches_Daily_LogDay] Script Date: 05/12/2010 11:08:22 ******/
        CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay] ON [dbo].[LogInvSearches_Daily]
        (
            [LogDay] ASC
        )
        INCLUDE ( [Inv_ID], [LogCount])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF,
              DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    I need to pull inventory only from the Inventory table for a specific account ID. I have an index on Inventory as well. I'm using the following query to aggregate the data and give me the top 5 records.
    This query is currently taking 24 seconds to return the 5 rows:

        SELECT TOP 5
               Sum(LogCount) AS Views
             , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
             , Inv_ID
        FROM LogInvSearches_Daily D (NOLOCK)
        WHERE LogDay > DateAdd(d, -30, getdate())
              AND EXISTS( SELECT NULL
                          FROM propertyControlCenter.dbo.Inventory (NOLOCK)
                          WHERE Acct_ID = 18731
                                AND Inv_ID = D.Inv_ID )
        GROUP BY Inv_ID

    Its execution plan:

        |--Top(TOP EXPRESSION:((5)))
        |--Sequence Project(DEFINE:([Expr1007]=dense_rank))
        |--Segment
        |--Segment
        |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC))
        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
        |--Sort(ORDER BY:([D].[Inv_ID] ASC))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010]))
        |  |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
        |  |  |--Constant Scan
        |  |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD)
        |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA

    I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster, and it gives me essentially the same execution plan.
    The CTE version and its plan:

        --SET SHOWPLAN_TEXT ON;
        WITH getSearches AS (
            SELECT LogCount
            --     , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
                 , D.Inv_ID
            FROM LogInvSearches_Daily D (NOLOCK)
                 INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK)
                     ON Acct_ID = 18731
                    AND I.Inv_ID = D.Inv_ID
            WHERE LogDay > DateAdd(d, -30, getdate())
            -- GROUP BY Inv_ID
        )
        SELECT Sum(LogCount) AS Views, Inv_ID
        FROM getSearches
        GROUP BY Inv_ID

        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
        |--Sort(ORDER BY:([D].[Inv_ID] ASC))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007]))
        |  |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
        |  |  |--Constant Scan
        |  |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD)
        |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD)

    So given that I'm getting good Index Seeks in my execution plan, what can I do to get this running faster? Thanks, Dan

    Read the article

  • Random access gzip stream

    - by jkff
    I'd like to be able to do random access into a gzipped file. I can afford to do some preprocessing on it (say, build some kind of index), provided that the result of the preprocessing is much smaller than the file itself. Any advice? My thoughts were:

    - Hack on an existing gzip implementation and serialize its decompressor state every, say, 1 MB of compressed data. Then to do random access, deserialize the decompressor state and read from the megabyte boundary. This seems hard, especially since I'm working with Java and I couldn't find a pure-Java gzip implementation :(
    - Re-compress the file in chunks of 1 MB and do the same as above. This has the disadvantage of doubling the required disk space.
    - Write a simple parser of the gzip format that doesn't do any decompressing and only detects and indexes block boundaries (if there even are any blocks: I haven't yet read the gzip format description).
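    A sketch of the second option (re-compress into independently decompressible ~1 MB members and keep a small offset index); as noted it costs a second copy on disk, but seeking becomes trivial. Python is used here only for illustration; the names and chunk size are assumptions, and a read that spans two chunks would need an extra loop:

        import gzip
        import json

        CHUNK = 1024 * 1024                      # uncompressed bytes per independent gzip member

        def build_chunked_copy(src_path, dst_path, index_path):
            """Re-compress src as a series of gzip members, recording where each
            member starts and which uncompressed offset it covers."""
            index = []
            with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
                uncompressed = 0
                while True:
                    block = src.read(CHUNK)
                    if not block:
                        break
                    index.append({"compressed": dst.tell(), "uncompressed": uncompressed})
                    dst.write(gzip.compress(block))
                    uncompressed += len(block)
            with open(index_path, "w") as f:
                json.dump(index, f)

        def read_at(dst_path, index, offset, size):
            """Random read: decompress only the member containing `offset`."""
            entry = max((e for e in index if e["uncompressed"] <= offset),
                        key=lambda e: e["uncompressed"])
            with open(dst_path, "rb") as f:
                f.seek(entry["compressed"])
                data = gzip.GzipFile(fileobj=f).read(CHUNK)
            start = offset - entry["uncompressed"]
            return data[start:start + size]

    For the first option, zlib's zran.c example does exactly the serialize-the-decompressor-state trick and is a useful reference even when working from Java.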

    Read the article

  • How can I parse a C header file with Perl?

    - by Alphaneo
    Hi, I have a header file in which there is a large array. I need to read this array with some program, perform an operation on each member, and write the result back. For example, I have something like:

        const BYTE Some_Idx[] = {
            4,7,10,15,17,19,24,29,
            31,32,35,45,49,51,52,54,
            55,58,60,64,65,66,67,69,
            70,72,76,77,81,82,83,85,
            88,93,94,95,97,99,102,103,
            105,106,113,115,122,124,125,126,
            129,131,137,139,140,149,151,152,
            153,155,158,159,160,163,165,169,
            174,175,181,182,183,189,190,193,
            197,201,204,206,208,210,211,212,
            213,214,215,217,218,219,220,223,
            225,228,230,234,236,237,240,241,
            242,247,249};

    Now, I need to read this, apply some operation on each of the member values, and create a new array (possibly in a different order), something like:

        const BYTE Some_Idx_Mod_mul_2[] = {
            8,14,20,
            ...
            484,494,498};

    Is there any Perl library already available for this? If not Perl, something else like Python is also OK. Can somebody please help?
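    Since Python is explicitly acceptable here, the regex-based sketch below pulls the numbers out of the initializer, applies the per-element operation, and writes a new array. The array name and the doubling operation come from the question; the file paths and everything else are assumptions:

        import re

        def transform_header(src_path, dst_path, name="Some_Idx", suffix="_Mod_mul_2"):
            text = open(src_path).read()
            # Grab the initializer list of the named array.
            m = re.search(r"const\s+BYTE\s+" + re.escape(name) + r"\s*\[\]\s*=\s*\{([^}]*)\}\s*;", text)
            if m is None:
                raise ValueError("array %s not found" % name)
            values = [int(v) for v in re.findall(r"\d+", m.group(1))]
            new_values = [v * 2 for v in values]          # the per-element operation
            # Re-emit as a C initializer, eight values per line.
            lines = [",".join(str(v) for v in new_values[i:i + 8])
                     for i in range(0, len(new_values), 8)]
            with open(dst_path, "w") as out:
                out.write("const BYTE %s%s[] = {\n    %s};\n"
                          % (name, suffix, ",\n    ".join(lines)))

        # transform_header("some_header.h", "some_header_new.h")   # paths are hypothetical

    For headers with richer structure than a flat initializer, a real C-parsing front end (for example Convert::Binary::C on the Perl side, or pycparser in Python) is the more robust route.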

    Read the article

  • What method should be used for searching this mysql dataset?

    - by GeoffreyF67
    I've got a mysql dataset that contains 86 million rows. I need to have a relatively fast search through this data. The data I'll be searching through is all strings. I also need to do partial matches. Now, if I have 'foobar' and search for '%oob%' I know it'll be really slow - it has to look at every row to see if there is a match. What methods can be used to speed queries like this up? G-Man
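    One family of methods for this is n-gram indexing: index every 3-character substring of each row once, then answer a '%oob%'-style query by intersecting the candidate sets for the pattern's trigrams and verifying only those rows. The Python sketch below only illustrates the idea in memory; in practice the same structure would live in a search engine (Sphinx, Lucene/Solr) or a dedicated index table rather than in application code:

        from collections import defaultdict

        def trigrams(s):
            """All 3-character substrings of a lowercased string."""
            s = s.lower()
            return {s[i:i + 3] for i in range(len(s) - 2)}

        def build_index(rows):
            """Map each trigram to the set of row ids whose text contains it."""
            index = defaultdict(set)
            for row_id, text in rows:
                for gram in trigrams(text):
                    index[gram].add(row_id)
            return index

        def candidates(index, pattern):
            """Rows containing every trigram of the pattern; a final substring
            check is still needed, but the scan shrinks to a short list."""
            grams = trigrams(pattern)
            if not grams:
                return None                      # pattern too short: fall back to a full scan
            return set.intersection(*(index.get(g, set()) for g in grams))

        rows = [(1, "foobar"), (2, "bazqux"), (3, "oboe"), (4, "floobs")]
        print(candidates(build_index(rows), "oob"))    # -> {1, 4}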

    Read the article

  • How to map an Entity Data Model conceptual model property to a storage model column using the "Serialized LOB" pattern

    - by codekaizen
    I have a conceptual model in EDM where one of the entities has a property which is essentially a big value object whose properties aren't really useful as columns in the data model. I'd like to apply the Serialized LOB pattern to it so that I can fit it into a 192-byte binary column. How do I map this in EDM v4? Is it even possible at this time? Actually, is it possible in any ORM?

    Read the article

  • Database over 2GB in MongoDB

    - by configurator
    We've got a file-based program we want to convert to use a document database, specifically MongoDB. Problem is, MongoDB is limited to 2GB on 32-bit machines (according to http://www.mongodb.org/display/DOCS/FAQ#FAQ-Whatarethe32bitlimitations%3F), and a lot of our users will have over 2GB of data. Is there a way to have MongoDB use more than one file somehow? I thought perhaps I could implement sharding on a single machine, meaning I'd run more than one mongod on the same machine and they'd somehow communicate. Could that work?

    Read the article

  • How can I scroll through a big file in vim?

    - by Luc M
    Hello, I have a big file with thousands of lines of thousands of characters. I move the cursor to the 3000th character. If I use PageDown or <CTRL>-D, the file will scroll but the cursor will come back to the first non-blank character. Is there an option I can set to keep the cursor in the same column after such a scroll? I see this behavior with gvim on Windows, and with vim on OpenVMS and Cygwin. Regards

    Read the article

  • Process xml-like log file queue

    - by Zsolt Botykai
    Hi all, first of all: I'm not a programmer, never was, although I have learned a lot during my professional career as a support consultant. Now my task is to process - and create some statistics about - a constantly written and rapidly growing XML-like log file. It's not valid XML, because it does not have a proper <root> element; the log looks like this:

        <log itemdate="somedate"> <field id="0" /> ... </log>
        <log itemdate="somedate+1"> <field id="0" /> ... </log>
        <log itemdate="somedate+n"> <field id="0" /> ... </log>

    For example, I have to count all the items with field id=0. But most of the solutions I have found (e.g. using XPath) report an error about the garbage after the first closing </log>. Most probably I can use Python (2.6, although I can compile 3.x as well), or some really old Perl version (5.6.x), or the recently compiled xmlstarlet, which really looks promising - I was able to create the statistics for a certain period after copying the file and pre- & appending the opening and closing root elements. But this is a huge file and copying takes time as well. Isn't there a better solution? Thanks in advance!
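    Because the statistic asked about here is a count, the file does not actually have to be parsed as XML at all: Python (available on the box) can stream it line by line and count matches, so nothing gets copied or wrapped. The log file name is a placeholder; the attribute tested is the one from the question:

        import re

        FIELD_RE = re.compile(r'<field\s+id="0"')    # the item the question wants counted

        count = 0
        with open("app.log", "r", encoding="utf-8", errors="replace") as log:
            for line in log:                         # streams the file; never loads it whole
                count += len(FIELD_RE.findall(line))
        print(count)

    If richer per-<log> statistics are needed later, the same streaming loop can pick up the itemdate attribute and reset counters whenever a new <log ...> line starts, still without wrapping the file in a root element.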

    Read the article

  • Mysql: create index on 1.4 billion records

    - by SiLent SoNG
    I have a table with 1.4 billion records. The table structure is as follows:

        CREATE TABLE text_page (
            text VARCHAR(255),
            page_id INT UNSIGNED
        ) ENGINE=MYISAM DEFAULT CHARSET=ascii

    The requirement is to create an index over the column text. The table size is about 34 GB. I tried to create the index with the following statement:

        ALTER TABLE text_page ADD KEY ix_text (text)

    After 10 hours of waiting I finally gave up on this approach. Is there any workable solution to this problem? UPDATE: the table is unlikely to be updated, inserted into, or deleted from. The reason to create an index on the column text is that this kind of SQL query would be frequently executed:

        SELECT page_id FROM text_page WHERE text = ?

    Read the article

  • Cloud HUGE data storage options?

    - by ToughPal
    Hi, Does anyone have a good suggestion on how to do video recording? We have a camera that can record and then stream live video to a server, which means we can have thousands of cameras sending data 24x7 for recording. We will store data for 7 / 14 / 30 days depending on the package. If a camera is sending data to the server, it will store 1.5 GB per day. So that means traffic of 1.5 GB / day / camera, or 45 GB / month / camera in total (data + bandwidth for one camera). Please let me know the most cost-effective way to get this data stored. Thanks!

    Read the article

  • Why does FastCGI not work well with Ruby on Rails?

    - by Jian Lin
    It is said that FastCGI doesn't work well with Ruby on Rails deployment. Why is that? In my previous experience, something either works quite well or it is fundamentally wrong. So if FastCGI is a viable solution, why is it not reliable with RoR? Does FastCGI work well with most other languages / frameworks?

    Read the article

  • What changes when your input is giga-/terabyte-sized?

    - by Wang
    I just took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, and NumPy, and PyTables, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python specific.)
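    One concrete pattern behind the libraries mentioned above: map the file instead of reading it, and walk it in blocks so only a small window is resident at any time (NumPy's memmap, like HDF5 datasets, reads pages on demand). The file name, dtype, and the 1600 x 48000 layout from the question are assumptions about how the data sits on disk:

        import numpy as np

        # Memory-map a large binary array: the OS pages data in on demand,
        # so the array can be far larger than RAM.
        data = np.memmap("haplotypes.dat", dtype=np.uint8, mode="r",
                         shape=(1600, 48000))

        # Process in row blocks so only a small window is touched at a time.
        block = 256
        col_sums = np.zeros(data.shape[1], dtype=np.int64)
        for start in range(0, data.shape[0], block):
            chunk = data[start:start + block]
            col_sums += chunk.sum(axis=0, dtype=np.int64)   # e.g. allele counts per site

        print(col_sums[:10])

    The 32-bit address-space point above is exactly where this pattern breaks down: mapping needs virtual address space even when it does not need RAM, which is one more reason terabyte-scale work tends to live on 64-bit machines.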

    Read the article

  • Perl - Reading .txt files line-by-line and using compare function (printing non-matches only once)

    - by Kurt W
    I am really struggling and have spent about two full days banging my head against getting the same result every time I run this Perl script. I have a Perl script that connects to a vendor tool and stores data for ~26 different elements within @data. There is a foreach loop over @data that breaks the 26 elements into $e->{'element1'}, $e->{'element2'}, $e->{'element3'}, $e->{'element4'}, etc. I am also reading the .txt files within a directory (line by line) and comparing the server names that exist within the text files with what exists in $e->{'element4'}. The Problem: Matches are working perfectly and only printing one line for each set of 26 elements when there is a match; however, non-matches are producing one line for every entry within the .txt files (37 in all). So if there are 100 entries (each entry having 26 elements) stored within @data, then there are 100 x 37 entries being printed. So for every non-match in the if ($e->{'element4'} eq '6' && $_ =~ /$e->{'element7'}/i) statement below, I am receiving a printout saying that there is not a match: 37 entries for the same identical 26 elements (because there are 37 total entries in all of the .txt files). The Goal: I need to print out only 1 line for each unique entry (a unique entry being $e->{'element1'} through $e->{'element26'}). It is already printing one line for matches, but it is printing out 37 entries when there is not a match. I need to treat matches and non-matches differently. Code:

        foreach my $e (@data) {
            # Open the .txt files stored within $basePath and use for comparison:
            opendir(DIRC, $basePath . "/") || die ("cannot open directory");
            my @files = (readdir(DIRC));
            my @MPG_assets = grep(/(.*?).txt/, @files);
            # Loop through each system name found and compare it with the data in SC for a match:
            foreach (@MPG_assets) {
                $filename = $_;
                open (MPGFILES, $basePath . "/" . $filename) || die "cannot open the file";
                while (<MPGFILES>) {
                    if ($e->{'element4'} eq '6' && $_ =~ /$e->{'element7'}/i) {
                        ## THIS SECTION WORKS PERFECTLY AND ONLY PRINTS MATCHES WHERE $_
                        ## (which contains the server names, 1 per line, in the .txt files)
                        ## EQUALS $e->{'element7'}.
                        print $e->{'element1'} . "\n";
                        print $e->{'element2'} . "\n";
                        print $e->{'element3'} . "\n";
                        print $e->{'element4'} . "\n";
                        print $e->{'element5'} . "\n";
                        # ...
                        print $e->{'element26'} . "\n";
                    } else {
                        ## **THIS SECTION DOES NOT WORK**. FOR EVERY NON-MATCH, THERE IS A
                        ## LINE PRINTED WITH 26 IDENTICAL ELEMENTS BECAUSE IT'S LOOPING THRU
                        ## THE 37 LINES IN THE *.TXT FILES.
                        print $e->{'element1'} . "\n";
                        print $e->{'element2'} . "\n";
                        print $e->{'element3'} . "\n";
                        print $e->{'element4'} . "\n";
                        print $e->{'element5'} . "\n";
                        # ...
                        print $e->{'element26'} . "\n";
                    } # End of 'if ($e->{'element4'} eq..' statement
                } # End of while loop
            } # End of 'foreach(@MPG_assets)'
        } # End of 'foreach my $e (@data)'

    I think I need something to identify unique entries and to define which fields make up a unique entry, but honestly I have tried everything I know. If you would be so kind as to provide actual code fixes, that would be wonderful, because I am headed to production with this script quite soon. Also, I am looking for code that is (ideally) very human-readable, because I will need to document it so others can understand. Please let me know if you need additional information.

    Read the article

  • What's the best (most efficient) way in ASP.NET to return a whole page into tabbed content?

    - by ijjo
    What I want to do is: every time I click on a tab, the content area is replaced by pretty much a whole new page. I don't want a full page load, so I want to do it with AJAX, but I'm used to sending back small JSON data via page methods. I'm not sure how I would construct a whole new page and return that via AJAX, and I would like to simply assign the whole returned content to a div and be done with it. What's the best way to do this with the least amount of overhead (I know there are some inefficient ways the ScriptManager does AJAX)? Or is it better to load the tabbed content in an iframe? FYI, I'm already using jQuery to call lightweight page methods on my ASP.NET page, and that works great.

    Read the article

  • Errors when attempting to update source files, Server 2012R2 (errors 80073701 and 14081)

    - by jeremy
    I have a Windows Server 2012R2 machine that I installed with Server Core, and then decided that I wanted to switch to GUI. I'll make the long story short: I ran windows updates, and now the source files are older/out of sync with the operating system, and I need to update the source files. Here are a couple of articles that outline how this is supposed to work: http://blog.coretech.dk/kaj/why-i-cant-convert-my-windows-server-2012-r2-core-to-gui/ http://blogs.technet.com/b/joscon/archive/2012/11/14/how-to-update-local-source-media-to-add-roles-and-features.aspx I have followed these instructions, but the updates are not successfully updating the source. I get errors like: "An error occurred - Package_for_KB29671203 Error: 0x80073701, Error: 14081, The referenced assembly could not be found." or "add-windowspackage failed. error code = 0x80073701, add-windowspackage: the referenced assembly could not be found" I've extensively searched for help on those error codes related to Server 2012 and windows updates, but my google-fu is failing me. I am using windows update packages found in c:\Windows\SoftwareDistribution\Download How can I get these updates to bring my source files up to current? Thanks!

    Read the article

  • ESXi 4.0 - cannot copy files

    - by Peter
    I am unable to copy files or make directories on my installation of VMware ESXi 4.0. I have done so in the past (copied an iso onto a datastore), but something has changed and I have no idea what. I cannot copy using the datastore browser (I get a dialog saying "Expected a PUT_FILE_DONE message. Got SESSION_COMPLETE"). I cannot create a directory through the datastore browser (I get a dialog saying "Cannot complete file creation operation"). When I ssh to the ESXi server I cannot create files or folders under /vmfs/volumes, but I can manipulate files elsewhere (including /vmfs). Here are the permissions for the directories (I am logged in as root):

        ~ # ls -lh /vmfs/volumes/
        drwxr-xr-t 1 root root 1.2k Sep  3 12:19 4a76f260-36b7eb85-c3b3-0024e8314929
        drwxr-xr-x 1 root root    8 Jan  1  1970 4a76f261-d6190a9e-3b89-0024e8314929
        drwxr-xr-t 1 root root 1.4k Sep 22 10:38 4a76f262-4ac21f0a-6bc1-0024e8314929
        l--------- 0 root root 1.9k Jan  1  1970 Hypervisor1 -> c42ce27f-eb8d7f70-7f70-0e7a85e8edc4
        l--------- 0 root root 1.9k Jan  1  1970 Hypervisor2 -> bbf1477b-4aec1d8c-caa5-5e8720bebd85
        l--------- 0 root root 1.9k Jan  1  1970 Hypervisor3 -> efd8efe3-03bc1cbf-15e0-080efd9e7379
        drwxr-xr-x 1 root root    8 Jan  1  1970 bbf1477b-4aec1d8c-caa5-5e8720bebd85
        drwxr-xr-x 1 root root    8 Jan  1  1970 c42ce27f-eb8d7f70-7f70-0e7a85e8edc4
        l--------- 0 root root 1.9k Jan  1  1970 datastore1 -> 4a76f260-36b7eb85-c3b3-0024e8314929
        l--------- 0 root root 1.9k Jan  1  1970 datastore2 -> 4a76f262-4ac21f0a-6bc1-0024e8314929
        drwxr-xr-x 1 root root    8 Jan  1  1970 efd8efe3-03bc1cbf-15e0-080efd9e7379
        ~ # touch /vmfs/foo.txt
        ~ # touch /vmfs/volumes/foo.txt
        touch: /vmfs/volumes/foo.txt: Operation not permitted

    I've googled and found nothing helpful. Does anyone out there have an idea as to what is going on? Thanks in advance. Pete.

    Read the article

  • How to add an SSH user to my Ubuntu 12 server to upload PHP files

    - by user229209
    I have an Ubuntu 12 VPS and wanted to create a user account to upload and download my PHP code. So when logged in as root I created a user "chris" and then created a directory /var/www/chris. I want "chris" to be able to upload and run files in the /var/www/chris directory. Permissions for the chris dir look like this:

        drwxrwxr-x 2 root chris 4096 Aug 20 03:35 chris

    As root I created a sample file called abc.php and put it in the chris dir. It worked fine when I tested it in a browser. I logged in as chris and uploaded a file called 1234.php. That did not work; I just got a blank PHP page. The code was identical in both files, so it is not the code. The permissions now look like this:

        -rw-r--r-- 1 root chris 59 Aug 20 03:34 1234.php
        -rw-r--r-- 1 root root  49 Aug 20 03:21 abc.php

    How do I allow the "chris" user to upload files and get them to work?

    Read the article

  • Windows system restore deletes various executables and *.js files. How does it decide which files to delete?

    - by Leftium
    I restored my system from a Windows System Restore point. It solved some issues I was having, but introduced other strange problems (like my optical drive disappearing). One thing that surprised me was that several files from my Web2Py installation were deleted: the executables and *.js files, and possibly some others (like favicon.ico). I did not expect this because Web2Py is basically a portable, standalone application: you just unzip it and run the executable inside, so nothing should be registered with Windows. My question is: what files does Windows System Restore delete, and how does it decide this? I'm just wondering what other files I'm missing and whether there's a way to restore them (without rolling back the restore point). Perhaps it scans for certain file types (like exe, js, ico, dll) with a creation date after the restore point's creation date? Some other people who experienced a similar problem: Dropbox: Lost Files; User files missing after run system restore. Update: I found some more references on how Windows System Restore works: Understanding how System Restore in Windows Vista treats executable files; Why Vista's System Restore is Dangerous and What to do About it

    Read the article

  • Change the default program for a filetype to something not in "Program Files" in Windows Vista

    - by Carson Myers
    I'm trying to make my python scripts run in python 2.6 by default when run from the command line. This is paired with adding certain scripts to the PATH variable and .py to PATHEXT for convenience. But I'll be damned if I can get the file type association to work. In the default programs dialog (found in control panel) I find .py, and click "Change Program." This gives me the same dialog as clicking "Open with..." on a file's context menu. I search for python. Tell it to use python, but it doesn't add it to the list of programs I am allowed to use. I tried making a shortcut to python in Program Files, but that won't work either. If I copy python into a folder in Program Files, then that works. But why can't I just point it at C:\python26\python.exe (which is in the PATH variable) in the first place? Is there a way around this, or do I have to just reinstall python into Program Files?

    Read the article
