Search Results

Search found 89549 results on 3582 pages for 'large file support'.


  • OS X bash grep - finding a search term in a large file with one single line

    - by unsynchronized
    Is there a simple Unix command line I can enter that lets me isolate, say, 512 bytes either side of a search term, even if there is only one "line" in a very large text file? OK, this should be easy. Famous last words. I'm not that familiar with grep, but it seems it is mainly used to filter out lines in the input that contain search terms. I have a very large JSON file that I downloaded and want to search for a particular term. Before you click the link, be warned: it's over 244 MB. It is from the Internet Wayback Machine and contains lists of zip files of archived photos; I am trying to find mine. Their web interface is broken, so I found the JSON file that they make public - it's the last one on the list. When I grep for my username, it finds it, but then proceeds to dump that line to the console. The problem is that the line is 244 MB long, and it's the only line in the file. I tried using less, but could not get it to do much - it's very slow, and it seems to have the same issue. Is there a simple Unix command line I can enter that lets me isolate, say, 512 bytes either side of a search term? (A command sketch follows this entry.)

    Read the article
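
    For the question above, one workable approach - a sketch assuming GNU grep and coreutils, with 'username' and data.json standing in for the real search term and file - is to ask grep for the byte offset of the match and then cut a window around it, so the 244 MB line is never printed:

        # -o prints only the matched text, -b its byte offset, -a treats the file as text,
        # -F searches for a fixed string; output looks like "12345678:username".
        offset=$(grep -obaF 'username' data.json | head -n1 | cut -d: -f1)

        # Extract 512 bytes either side of the start of the match (clamped at the file start).
        start=$(( offset > 512 ? offset - 512 : 0 ))
        dd if=data.json bs=1 skip="$start" count=1024 2>/dev/null; echo

    Stock OS X ships BSD grep, which also has these flags; if the reported offsets look wrong there, GNU grep from a package manager behaves as commented.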

  • IIS7 large worker-process request queue causing blocking; aspnet.config & machine.config amended (bottleneck)

    - by scott_lotus
    ASP.NET 2.0 app, .NET 2.0 framework, IIS7. I am seeing a large queue of "requests" appear under the "worker process" option. The states recorded are mostly Authenticate Request and Execute Request Handler. I have amended aspnet.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727 (32-bit path and 64-bit path) to include: maxConcurrentRequestsPerCPU="50000" maxConcurrentThreadsPerCPU="0" requestQueueLimit="50000" I have amended machine.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG (32-bit and 64-bit path) to include: autoConfig="true" maxIoThreads="100" maxWorkerThreads="100" minIoThreads="50" minWorkerThreads="50" minFreeThreads="176" minLocalRequestFreeThreads="152" Still I get the issue. The issue manifests itself as a large number of requests in the worker process queue. The number of current connections to the website is around 500 when this occurs; I don't think I have seen concurrent connections over 500 without the issue occurring. The web application slows as the requests block. Recycling the app pool resolves it for a while (as expected) as the load is spread between the two pools. The application pool in question has been set to recycle after a fixed number of requests (50,000). Thank you for any help. Scott. Quick edit: my developers are telling me the project was built against the .NET 3.5 framework. Looking at C:\Windows\Microsoft.NET\Framework64\v3.5 there does not appear to be an aspnet.config or a machine.config - is there a 3.5 equivalent? After a little searching: apparently 3.5 uses the 2.0 framework files that 3.5 is missing. So back to the original question: where is my bottleneck?

    Read the article

  • Load Sharing for Large Websites

    - by JHarley1
    Hello, I have a question regarding load sharing for large websites. My understanding: if you have a website that gets millions of hits a day, you will need an architecture that can support that sort of pressure. You can do one of two things: invest in a single large server that has huge amounts of processing power, memory and storage (such as Microsoft's TerraServer), or spread the load of your website across a number of machines. Let me tackle the second approach: you have a collection of machines, all running web server software and all having access to identical copies of the website's pages. You can either spread the load across these machines using a cyclic (round-robin) pattern in DNS or you can use a load-balancing switch. The advantages of this approach are redundancy - servers can fail and the others would "pick up the slack" - and incremental growth - the ability to easily add new machines to the set-up. My questions: Is there a virtual approach to this issue of load balancing now? If the website runs from a database, is there still only a single copy of the database? If a user had a session running on one server (e.g. they had gone to www.example.org and had been assigned to Server 2, where they had created a session), would they still have their session if they refreshed the website and were allocated Server 3? What are the other disadvantages associated with load balancing? Many thanks, J. (A small shell sketch for checking session stickiness follows this entry.)

    Read the article
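
    On the session question above, a quick empirical check - a hedged sketch, not a definitive answer; www.example.org is the poster's own placeholder - is to hit the site repeatedly with and without a cookie jar and watch the response headers, since many balancers reveal the backend or set a stickiness cookie:

        # Fresh requests each time: a round-robin setup may answer from different backends.
        for i in $(seq 1 10); do
          curl -s -o /dev/null -D - http://www.example.org/ | grep -i -E 'set-cookie|server'
        done

        # Reusing one cookie jar: a sticky balancer should keep these on the same backend,
        # so an in-memory session created there would survive a refresh.
        for i in $(seq 1 10); do
          curl -s -o /dev/null -D - -b cookies.txt -c cookies.txt http://www.example.org/ \
            | grep -i -E 'set-cookie|server'
        done

    Whether a session survives landing on a different server ultimately depends on where session state is kept: per-server memory is lost on a switch, while a shared store (database or cache) is not.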

  • How do I import a large SQL file into a local LAMP (XAMPP) environment?

    - by mraslton
    I have used Linux to import a large MySQL dump file (into a new database), but am new to how the process works in a local LAMP environment using XAMPP, as XAMPP does not support SSH. I've downloaded the large_dump_file.sql from the Linux server to my local system. I'm using Windows XP and have used XAMPP to set up LAMP. I am able to access the local_database via phpMyAdmin, but the dump file is too large to import using that app. I'm trying to import the file via the command prompt, but so far with no success. At the prompt: cd .. cd .. cd xampp cd mysql cd bin I've found that mysqlimport is used to import .csv and .txt files, and mysql is used to import .sql files, but I can't find documentation on how to use the -u and -p options, so I've tried many variations of the command with no luck. What would be the proper command? (A sketch follows this entry.) I've modified the hosts file, the virtual-hosts conf and the Apache config files. Do I need to change any other config files on my local system? Thanks.

    Read the article
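
    The command the poster is after, run from the xampp\mysql\bin prompt, is sketched below; local_database and root come from the post, while the dump path is a placeholder, and -p makes the client prompt for the password:

        REM Feed the dump to the mysql client on standard input.
        mysql -u root -p local_database < C:\path\to\large_dump_file.sql

        REM If a very large dump aborts partway, raising the client packet limit sometimes helps.
        mysql -u root -p --max_allowed_packet=256M local_database < C:\path\to\large_dump_file.sql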

  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an update statement as part of some maintenance which did a cross-join update on two tables with 200,000 records each. That's 40 billion row combinations, which would explain part of how the log grew to 200 GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide - we have almost 200 databases residing there. The 'solution' I used was to back up the database, back up the log with TRUNCATE_ONLY, and then back up the database again. I then shrank the log file and set a cap on the log. Seeing as there were other databases using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log up to our backup drive instead, hoping that no other databases needed to grow their log files. Paul Randal, in http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx, writes: "Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database." Were there any other options I'm not aware of? (A sketch of the supported approach follows this entry.)

    Read the article
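
    Since NO_LOG/TRUNCATE_ONLY are gone in SQL Server 2008, the generally supported alternative - sketched here with sqlcmd, placeholder names MyDb/MyDb_log, and a backup target on a drive with free space - is a real log backup followed by a shrink of the log file:

        REM Back up the log to a disk that still has room, which lets the log space be reused.
        sqlcmd -S . -E -Q "BACKUP LOG [MyDb] TO DISK = 'E:\backups\MyDb_log.trn'"

        REM Then shrink the physical log file back down (target size in MB).
        sqlcmd -S . -E -Q "USE [MyDb]; DBCC SHRINKFILE (MyDb_log, 1024)"

    Switching rarely-backed-up databases to the SIMPLE recovery model is the other standard way to keep their logs from growing unchecked.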

  • ffmpeg - h264 to xvid creates large file

    - by fatnic
    I'm trying to use ffmpeg to convert an h264/aac video file to an xvid/mp3 file so I can play it in my ultra-cheap media player. At the moment the converted video file is TWICE the size of the original mp4. Is there any way to get a smaller file size without losing too much quality? Even a drop to -qmin 1 is pretty awful! The command I'm using (see the sketch after this entry) is ffmpeg -i input.mp4 -vcodec libxvid -sameq -acodec libmp3lame -ab 128k -ac 2 output.avi And the ffmpeg output is Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4' Metadata: major_brand : isom minor_version : 1 compatible_brands: isomavc1 Duration: 01:34:27.69, start: 0.000000, bitrate: 1520 kb/s Stream #0.0(und): Video: h264, yuv420p, 720x304 [PAR 1:1 DAR 45:19], 1387 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s Output #0, avi, to 'output.avi': Metadata: ISFT : Lavf52.64.2 Stream #0.0(und): Video: mpeg4, yuv420p, 720x304 [PAR 1:1 DAR 45:19], q=2-31, 200 kb/s, 25 tbn, 25 tbc Stream #0.1(und): Audio: libmp3lame, 48000 Hz, stereo, s16, 128 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1

    Read the article
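
    The likely culprit above is -sameq: it keeps the same quantizer scale, not the same quality, and across a codec change that usually inflates the bitrate enormously. A sketch of two alternatives (the bitrate figure is only an illustration based on the source's ~1387 kb/s video stream):

        # Constant-quality Xvid: -qscale 2-5 trades size for quality (lower = better and bigger).
        ffmpeg -i input.mp4 -vcodec libxvid -qscale 4 -acodec libmp3lame -ab 128k -ac 2 output.avi

        # Or target roughly the source's video bitrate instead of a fixed quality.
        ffmpeg -i input.mp4 -vcodec libxvid -b 1400k -acodec libmp3lame -ab 128k -ac 2 output.avi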

  • How to configure a large MTU (Linux)

    - by Somejan
    I have a gigabit Ethernet connection from my laptop to my router, and a working IPv6 connection to the internet. I can receive very large packets from sites on the internet, with sizes up to at least 10000 bytes according to Wireshark. (Edit: this turns out to be Linux's 'generic receive offload'.) However, when trying to send anything, my local computer fragments at just below 1500 bytes for IPv6. (On IPv4, I can send TCP packets to the internet of at least 1514 bytes, and I can ping with packets up to the configured MTU of 6128, but they are blackholed.) I'm on Ubuntu 12.04. I have configured an MTU of 6128 for my eth0 (the maximum it accepts), both using ip link set dev eth0 mtu 6128 and in the NetworkManager applet GUI, and restarted the connection. ip link show eth0 shows the 6128 MTU is indeed set. ip -6 route shows that none of the paths the kernel knows about have an MTU set. I can ping over IPv4 with packets up to 6128 bytes (though I don't get responses), but when I do ping6 myrouter -c3 -s1500 -Mdo I get error replies from my own computer saying that the packets are too large and the MTU is 1480. I have confirmed with Wireshark that nothing is put on the wire, and the replies are indeed generated by my own computer. So, how do I get my computer to use the larger MTU? (A sketch follows this entry.)

    Read the article
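
    One thing worth checking, sketched below for iproute2 on Ubuntu 12.04 (the gateway and prefix shown are placeholders, not taken from the post): besides the link MTU, the kernel keeps an IPv6-specific per-interface MTU (often pinned by a router advertisement) and honours per-route MTUs, either of which can cap outgoing packets below what eth0 is set to.

        # Interface MTU (already done by the poster).
        sudo ip link set dev eth0 mtu 6128

        # IPv6 keeps its own per-interface MTU, which a router advertisement can pin to 1480;
        # check it and, if needed, raise it to match.
        sysctl net.ipv6.conf.eth0.mtu
        sudo sysctl -w net.ipv6.conf.eth0.mtu=6128

        # A per-route MTU would also cap packets; fe80::1 here is a placeholder gateway.
        sudo ip -6 route change default via fe80::1 dev eth0 mtu 6128

        # Verify with a don't-fragment ping just under the MTU:
        # 6128 - 40 (IPv6 header) - 8 (ICMPv6 header) = 6080 bytes of payload.
        ping6 -c 3 -s 6080 -M do myrouter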

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty evenly across the set, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems: Using a filesystem is very inefficient since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive. Updating the dataset is very time-consuming and requires remounting between set A and B. The only reasonable handling is operating on the block device for backup, copying, etc. What I would like is a daemon that: speaks HTTP (get, put, delete and perhaps update); stores data in an efficient structure, with the index kept in memory and, given the number of objects, a small overhead; can handle massive numbers of connections with little (if any) ramp-up time, reading the index into memory at startup. Statistics would be nice, but not mandatory. I have experimented a bit with Riak, Redis, MongoDB, Kyoto and Varnish with persistent storage, but I haven't had the chance to dig in really deep yet.

    Read the article

  • Export-Mailbox - fails with large folders

    - by grojo
    I am trying to move messages from a rather large mailbox to an archive mailbox, but I run into errors all the time. The command I am executing is Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER -StartDate 2009-02-01 -EndDate 2009-02-28 -DeleteContent -Confirm:$false I can copy/move some messages, but run into frequent "an unknown error has occurred" failures (status code -1056749164). I run the console as an administrative user, and all permissions are set correctly, as far as I can tell. I've restricted the start and end dates in case the number of messages moved/deleted was creating problems. Is there anything I am missing in my setup? Corrupted messages? Over-limit message sizes? Update: what I've learnt so far is that folders with more than approximately 3,000 messages will generate errors. If deleted-item retention is set (default 30 days), Export-Mailbox will scan all messages whether or not they were deleted in previous runs, so restricting the dates to limit the number of messages will not work. To avoid errors, I've switched off deleted message retention for the mailbox, moved the messages from one large folder into multiple folders, and moved these one by one.

    Read the article

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Windows Server 2008 R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying over some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the VM's drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next. (A checksum-manifest sketch follows this entry.)

    Read the article
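
    One way to narrow down which hop corrupts the data - a sketch with placeholder paths, using the same md5sum tool the poster already relies on - is to build a checksum manifest where the files are known to be good and re-verify it after every copy step (co-worker's machine, network share, VM guest):

        # Where the files are known good (e.g. straight off the Blu-ray):
        md5sum /media/bdrom/*.iso > manifest.md5

        # After each copy, check the copies against the manifest; the first step
        # that reports FAILED is the one introducing the corruption.
        md5sum -c manifest.md5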

  • Copy/paste speed very slow for a large number of files on Windows [closed]

    - by Arno2501
    I've run the following test. I created a folder containing 15,000 files of 400 bytes using this batch: @ECHO off SET times=15000 FOR /L %%i IN (1,1,%times%) DO ( fsutil file createnew filename%%i.txt 400 ) Then I copied it on my Windows computer using this command: robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\ After it completed I could see that the transfer rate was 915,810 bytes/sec, which is less than 1 MB/s; it took several seconds to copy 7 MB. Please note that this is very slow. I tried the same with a folder containing a single file of 50 MB and the transfer rate was 1,219,512,195 bytes/sec (yeah, GB/s) - effectively instantaneous. Why does copying a large number of files take so much time and so many resources on a Windows filesystem? Please note that I've tried the same on a Linux system running on the same computer in a virtual machine (VMware Player) with an ext3 filesystem; I used the cp command and the copy is instantaneous (that test is sketched after this entry). Please also note the following: no antivirus; I've tested that behaviour on multiple Windows computers (always NTFS) and always get comparable results (transfer rate under 1 MB/s, average 7-8 seconds to copy 7 MB); I've tested on multiple Linux ext3 systems and the copy is always instantaneous for that amount (15,000 files of 400 bytes). The question is about understanding what makes the Windows filesystem so slow at copying a large number of files compared to, for instance, a Linux one.

    Read the article
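
    For reference, the Linux half of the comparison described above can be reproduced in a few lines (a sketch using the same file count and size as the batch test):

        # Create 15,000 files of 400 bytes each, mirroring the fsutil loop.
        mkdir -p LargeNumberOfFiles
        for i in $(seq 1 15000); do head -c 400 /dev/zero > "LargeNumberOfFiles/filename$i.txt"; done

        # Time the copy; the poster reports this as effectively instantaneous on ext3.
        time cp -r LargeNumberOfFiles LargeNumberOfFiles2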

  • Using sed to introduce a newline after each > in a 1+ gigabyte one-line text file

    - by wasatz
    I have a giant text file (about 1.5 gigabytes) with XML data in it. All text in the file is on a single line, and attempting to open it in any text editor (even the ones mentioned in this thread: http://stackoverflow.com/questions/159521/text-editor-to-open-big-giant-huge-large-text-files ) either fails horribly or is totally unusable because the editor hangs when attempting to scroll. I was hoping to introduce newlines into the file by using the following sed command: sed 's/>/>\n/g' data.xml > data_with_newlines.xml Sadly, this caused sed to give me a segmentation fault. From what I understand, sed reads the file line by line, which would in this case mean that it attempts to read the entire 1.5 GB file as one line, which would most certainly explain the segfault. However, the problem remains: how do I introduce newlines after each > in the XML file? Do I have to resort to writing a small program that does this for me by reading the file character by character? (A sketch follows this entry.)

    Read the article
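
    sed holds each "line" in memory, so a 1.5 GB line is exactly what kills it. A common workaround - sketched with the same filenames as the post - is to let perl read the file in records that end at '>' instead of at newlines, so memory use stays tiny:

        # $/ is perl's input record separator; with it set to '>', each chunk ends at a '>',
        # and the substitution appends the newline the poster wants.
        perl -pe 'BEGIN { $/ = ">" } s/>/>\n/' data.xml > data_with_newlines.xml

    An XML pretty-printer such as xmllint --format would also work, but it builds the whole document in memory, which may be impractical at this size.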

  • Binary search in a sorted (memory-mapped ?) file in Java

    - by sds
    I am struggling to port a Perl program to Java, and learning Java as I go. A central component of the original program is a Perl module that does string prefix lookups in a 500+ GB sorted text file using binary search (essentially, "seek" to a byte offset in the middle of the file, backtrack to the nearest newline, compare the line prefix with the search string, "seek" to half/double that byte offset, repeat until found...). I have experimented with several database solutions but found that nothing beats this in sheer lookup speed with data sets of this size. Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random-access reads in text files? Alternatively, I am not familiar with the new (?) Java I/O libraries, but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems. (A quick shell cross-check is noted after this entry.)

    Read the article
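
    Not an answer to the Java question, but while porting it can help to cross-check results from the shell: look(1) does exactly this kind of binary prefix search on a sorted file (the file must be sorted the same way look compares, so LC_ALL=C sort is the safe pairing; the prefix and path below are placeholders):

        # Binary-search the sorted file for lines beginning with the given prefix.
        LC_ALL=C look someprefix /path/to/huge-sorted.txt | head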

  • Find a specific couple of lines of code in a large git repo

    - by mustISignUp
    So I remember that I once did something in another project (and later removed it) that could be useful now. Thanks to some other SO post I managed to search for a half-remembered string: git grep halfRemeberedNameOfFunction $(git log -g --pretty=format:%h) and yay! I got some results: 2d0bcde:path/to/project/file.c: result = halfRemeberedNameOfFunction( data ); 65fc672:path/to/project/file.c: result = halfRemeberedNameOfFunction( data ); 24f2858:path/to/project/file.c: result = halfRemeberedNameOfFunction( data ); 252e3a5:path/to/project/file.c: result = halfRemeberedNameOfFunction( data, args ); b58bc0b:path/to/project/file.c: result = _halfRemeberedNameOfFunction( data, options ); dce8d9d:path/to/project/file.c: result = halfRemeberedNameOfFunction( data, moreData ); But how do I get that file at one of those revisions? Many thanks. (See the sketch after this entry.)

    Read the article
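
    To answer the closing question: git show takes a revision:path spec and prints the file exactly as it was at that commit (shown here with one of the hashes from the grep output above):

        # Print the file as it existed at commit 2d0bcde.
        git show 2d0bcde:path/to/project/file.c

        # Or restore that version of the file into the working tree (it will also be staged).
        git checkout 2d0bcde -- path/to/project/file.c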

  • Stopping by the Store

    - by [email protected]
    Registrants Get Online Savings on Oracle Products Have you heard about the Oracle Store? It's the one-stop online shop for buying Oracle software and support at significant savings. Better yet, when you register for Oracle OpenWorld 2010 by April 30, you can get an additional 10% off your next purchase. The 10% discount applies to a one-time "click and buy" checkout, so load up as many items as you can. To get started, you'll need to visit the Oracle OpenWorld registration page to get more information about the promotion, including the promo code and link. It's another great way to turn your early bird registration into a long-term gain for your organization.

    Read the article

  • E-Business Suite Technology Stack Roadmap (April 2010) Now Available

    - by Steven Chan
    Keeping up with our E-Business Suite technology stack roadmap can be challenging. Regular readers of this blog know that we certify new combinations and versions of Oracle products with the E-Business Suite every few weeks. We also update our certification plans and roadmap as new third-party products like Microsoft Office 2010 and Firefox are announced or released. Complicating matters further, various Oracle products leave Premier Support or are superseded by more recent versions. This constant state of change means that any static representation of our roadmap is really a snapshot in time, and a snapshot that might begin to yellow and fade fairly quickly. With that caveat in mind, here's this month's snapshot that I presented at the OAUG/Collaborate 2010 conference in Las Vegas last week: EBS Technology Stack Roadmap (April 2010)

    Read the article

  • Should I force users to update an application?

    - by Brian Green
    I'm writing an application for a medium-sized company that will be used by about 90% of our employees and our clients. In planning for the future, we decided to add functionality that verifies that the version of the program being run is one we still support. Currently the application force-quits if the version is not among our supported versions. Here is my concern. Hypothetically, in version 2.0.0.1, method "A" crashes and burns in glorious fashion and method "B" works just fine. We release 2.0.0.2 to fix method A and deprecate 2.0.0.1. Now anyone running 2.0.0.1 just to use method B will be forced to update to fix something that isn't an issue for them right now. My question is: will the time saved not troubleshooting old, unsupported versions outweigh the cost in usability?

    Read the article

  • How come ASUS sells Ubuntu-installed 1225Cs when we don't have drivers for the GMA 3600?

    - by sumit_gt
    I recently purchased an ASUS 1225C Cedar Trail netbook. It didn't come preinstalled with Ubuntu like it does in some markets. I installed Ubuntu 12.04 LTS on it. While Ubuntu works just fine, I am bothered by the lack of proper graphics support because of the Intel GMA 3600. I am aware that this is not a problem with Ubuntu. My question is this: how does ASUS manage to sell these netbooks in other markets with Ubuntu preinstalled? How does ASUS manage to get everything to work? And if they can, why aren't I able to do the same? Link: for more on the ASUS 1225C. As you can see, they show Ubuntu working perfectly in the screenshots. Is this false advertising?

    Read the article

  • Should I pay my developers for bug fixes on a project that's still in progress?

    - by Wanda Pebbs
    We are working with a group of developers on a project. The project is still in progress (not completed), and these developers charge us for time spent fixing bugs in code that was not written correctly in the first place. I understand that we should pay for changes and new requests, if any, but not for bug fixes on work in progress. We also understand that once the work is deployed to the live site, we may be liable for bug fixes that arise after the support period has been exhausted. The question now is: is it appropriate for such charges to be levied upon us while the project is still in progress?

    Read the article

  • CRU (customer-replaceable unit) parts under ACS T&M support

    - by aiyoku
    [Japanese-language post; most of the body was lost to character-encoding corruption. The recoverable details:] the post concerns replacement of ACS T&M customer-replaceable unit (CRU) parts under Premier Support for Systems, points readers to the note "Delivery Method Chart: Replacement Parts and Installation of Integrated Software Updates" (Doc ID 1664356.1), and notes that in the Oracle System Handbook's Full Component List a [C] suffix on the Manufacturing Part # (for example, 7039990 [C]) marks a part as customer-replaceable.

    Read the article

  • File path for J2ME FileConnection?

    - by Kilnr
    Hi, I'm writing a MIDlet which needs to write a file. I'm using FileConnection from JSR-75 to accomplish this. The intention is to have this MIDlet running on as many devices as possible (all MIDP 2.0 devices with JSR-75 support, ideally). On several emulators and an HTC Touch Pro2, I can perfectly well use the following code to get the root of the filesystem: Enumeration drives = FileSystemRegistry.listRoots(); String root = (String) drives.nextElement(); String path = "file:///" + root; However, on a Nokia S60 5th edition emulator, trying to open a FileConnection to this path throws a java.lang.SecurityException. Apparently S60 devices do not allow connections to the root of the filesystem. I realise I can use something like System.getProperty("fileconn.dir.photos"), but that isn't supported on all devices either. So, my actual question: what is the best approach to get a path to create a FileConnection with that allows for maximum portability? Thanks. Edit: I suppose I could iterate over all the roots in the Enumeration and check for a writable one, but that's hardly optimal, for two reasons. First, there aren't necessarily any writable roots. Second, this could be the phone memory or a memory card, so the storage location wouldn't be consistent across devices, which is rather ugly.

    Read the article

  • Iterating arrays in a batch file

    - by dboarman-FissureStudios
    I am writing a batch file (I asked a question on SU) to iterate over terminal servers searching for a specific user. So, I got the basic start of what I'm trying to do. Enter a user name Iterate terminal servers Display servers where user is found (they can be found on multiple servers now and again depending on how the connection is lost) Display a menu of options Iterating terminal servers I have: for /f "tokens=1" %%Q in ('query termserver') do (set __TermServers.%%Q) Now, I am getting the error... Environment variable __TermServers.SERVER1 not defined ...for each of the terminal servers. This is really the only thing in my batch file at this point. Any idea on why this error is occurring? Obviously, the variable is not defined, but I understood the SET command to do just that. I'm also thinking that in order to continue working on the iteration (each terminal server), I will need to do something like: :Search for /f "tokens=1" %%Q in ('query termserver') do (call Process) goto Break :Process for /f "tokens=1" %%U in ('query user %%username%% /server:%%Q') do (set __UserConnection = %%C) goto Search However, there are 2 things that bug me about this: Is the %%Q value still alive when calling Process? When I goto Search, will the for-loop be starting over? I'm doing this with the tools I have at my disposal, so as much as I'd like to hear about PowerShell and other ways to do this, it would be futile. I have notepad and that's it. Note: I would continue this line of questions on SuperUser, except that it seems to be getting more into programming specifics.

    Read the article

  • File extensions and MIME Types in .NET

    - by Marc Climent
    I want to get a MIME Content-Type from a given extension (preferably without accessing the physical file). I have seen some questions about this, and the methods described to perform it can be summarized as: Use registry information. Use urlmon.dll's FindMimeFromData. Use IIS information. Roll your own MIME mapping function, based on this table, for example. I've been using no. 1 for some time, but I realized that the information provided by the registry is not consistent and depends on the software installed on the machine. Some extensions, like .zip, often don't have a Content-Type specified. Solution no. 2 forces me to have the file on disk in order to read the first bytes, which is slow but may get good results. The third method is based on Directory Services and all that stuff, which is something I don't like much because I have to add COM references, and I'm not sure it's consistent between IIS6 and IIS7. Also, I don't know the performance of this method. Finally, I didn't want to use my own table, but in the end it seems the best option if I want decent performance and consistency of results between platforms (even Mono). Do you think there's a better option than using my own table, or is one of the other described methods better? What's your experience?

    Read the article

  • File Format DOS/Unix/MAC code sample

    - by mac
    I have written the following method to determine whether the file in question is formatted with DOS, Mac, or Unix line endings. I see at least one obvious issue: I am hoping that I will get the EOL on the first pass, say within the first 1000 bytes, which may or may not happen. I ask you to review this and suggest improvements that will harden the code and make it more generic. Thank you. new FileFormat().discover(fileName, 0, 1000); and then public void discover(String fileName, int offset, int depth) throws IOException { BufferedInputStream in = new BufferedInputStream(new FileInputStream(fileName)); FileReader a = new FileReader(new File(fileName)); byte[] bytes = new byte[(int) depth]; in.read(bytes, offset, depth); a.close(); in.close(); int thisByte; int nextByte; boolean isDos = false; boolean isUnix = false; boolean isMac = false; for (int i = 0; i < (bytes.length - 1); i++) { thisByte = bytes[i]; nextByte = bytes[i + 1]; if (thisByte == 10 && nextByte != 13) { isDos = true; break; } else if (thisByte == 13) { isUnix = true; break; } else if (thisByte == 10) { isMac = true; break; } } if (!(isDos || isMac || isUnix)) { discover(fileName, offset + depth, depth + 1000); } else { // do something clever } }

    Read the article
