Search Results

Search found 44076 results on 1764 pages for 'large text'.


  • Edit 100MB+ file

    - by Majid Fouladpour
    I have captured some traffic with Wireshark and saved the result as a file. The file now has 3 sections: request headers, response headers, and the response body. The response body is to become an FLV file, but at the moment everything is saved as a single file. So I need a way to delete the first two sections from the file, but the problem is that the file is very big (over a thousand megabytes). I have tried to open it with gedit, but no matter how long I wait, gedit hangs and remains unresponsive until I kill it. What tool can I use to edit this big file easily?
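
    A file this size is better handled by streaming than by a text editor. Below is a minimal Python sketch (my own, not from the question; the file names are placeholders) that assumes the three sections are separated by blank lines, as an HTTP exchange exported from Wireshark usually is, and copies everything after the second blank line into a new file:

        # Strip everything up to the end of the HTTP response headers and stream
        # the remaining bytes (the FLV body) into a new file, 1 MB at a time.
        CHUNK = 1024 * 1024

        with open("capture.raw", "rb") as src, open("video.flv", "wb") as dst:
            head = src.read(64 * 1024)                  # headers sit in the first few KB
            first = head.find(b"\r\n\r\n")              # end of request headers
            second = head.find(b"\r\n\r\n", first + 4)  # end of response headers
            if first == -1 or second == -1:
                raise SystemExit("header/body boundary not found in the first 64 KB")
            dst.write(head[second + 4:])                # body bytes already read
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                dst.write(chunk)

    Once the byte offset where the body starts is known, tail -c +OFFSET (1-based) or an equivalent dd invocation achieves the same thing without any scripting.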

    Read the article

  • How do I create a file?

    - by Shira
    Hi, I'm following instructions about installing and setting up TFTPD in Ubuntu. It asks me to create /etc/xinetd.d/tftp and put this entry in it: service tftp { protocol = udp port = 69 socket_type = dgram wait = yes user = nobody server = /usr/sbin/in.tftpd server_args = /tftpboot disable = no } What does it mean to create it? Is there a command I need to type? And "put this entry" - does that mean typing all the lines above on one line? I don't know Linux and need your help, please.
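
    "Create" simply means make a new text file at that exact path and type the entry into it, one setting per line between the braces; the file lives under /etc, so you need root, and the usual route is sudo nano /etc/xinetd.d/tftp, type the lines, and save. Purely as an illustration, a small Python sketch (run as root) that writes the same file would be:

        # Hypothetical helper: writes /etc/xinetd.d/tftp with the entry from the
        # guide, one "key = value" setting per line inside the braces. Needs root.
        ENTRY_LINES = [
            "service tftp",
            "{",
            "    protocol    = udp",
            "    port        = 69",
            "    socket_type = dgram",
            "    wait        = yes",
            "    user        = nobody",
            "    server      = /usr/sbin/in.tftpd",
            "    server_args = /tftpboot",
            "    disable     = no",
            "}",
        ]

        with open("/etc/xinetd.d/tftp", "w") as f:
            f.write("\n".join(ENTRY_LINES) + "\n")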

    Read the article

  • Reflective practice in programming using keystroke playback

    - by Graham
    I'm thinking of applying Reflective Practice to improving my programming skills. To that end, I want to be able to watch myself writing code. In general, what is a good method for applying Reflective Practice to the craft of programming? In particular, if it's a good idea, is there an editor that records keystrokes then plays them back at a later time - possibly running the keys together without delays, or replaying at a 2x/4x/8x accelerated rate? Screencasting with RecordMyDesktop is an option, but has downsides of waiting for encoding and ending up with a big video file instead of a list of keystrokes.
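
    For the record-and-replay part, a rough sketch of the idea (my own, not from the post) is below. It assumes the third-party pynput package (pip install pynput), records printable keystrokes with timestamps until Esc is pressed, and then types them back four times faster:

        import time
        from pynput import keyboard

        events = []            # (seconds since start, character typed)
        start = time.time()

        def on_press(key):
            ch = getattr(key, "char", None)    # special keys have no .char
            if ch is not None:
                events.append((time.time() - start, ch))
            if key == keyboard.Key.esc:        # press Esc to stop recording
                return False

        with keyboard.Listener(on_press=on_press) as listener:
            listener.join()

        # Replay 4x faster, preserving the relative rhythm of the session.
        SPEEDUP = 4.0
        ctl = keyboard.Controller()
        prev = 0.0
        for t, ch in events:
            time.sleep((t - prev) / SPEEDUP)
            ctl.type(ch)
            prev = t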

    Read the article

  • Open .doc file from my website in browser

    - by seth
    What's the best way to give the end user of my web application the ability to open, edit and save (via the browser) Word documents that are stored in my database? I have this working by doing an HTML conversion of the file (via Aspose Words), but this method is far from flawless and I'm trying to improve it. Is integrating with Google Docs possible/good? Their editor seems awesome and very powerful. I can't use any Microsoft Word objects (this is even discouraged by MS). EDIT: The application is developed in .NET and currently uses the .NET Framework 2.0. However, as this is fairly obsolete, the idea is to restart from scratch and therefore use the 4.0 framework and C# or VB.

    Read the article

  • Ubuntu 13.04 Rename Computer

    - by Sourabh
    How can I rename my computer? Renaming it in /etc/hosts and /etc/hostname does something weird. Before renaming it, I am able to open these files via Sublime using sudo subl /etc/hosts, but when I rename my computer (using nano) and open either of these files using subl, I get this message: No protocol specified (sublime_text:20071): Gtk-WARNING **: cannot open display: :0.0 So I guess renaming in the above files is not the only thing I have to do. PS: If I rename using Sublime, after renaming one of the files I get the same message when I try to open the other file.

    Read the article

  • Denali CTP3 - Semantic Search 2 (Lots of documents)

    - by sqlartist
    Hi again, I thought I would improve on the previous post by actually putting a decent amount of content into the Filetable - this time I used the open-source DMOZ Health document repository, which contains 5,880 files inside 220 folders. The files are all HTML and are pretty small in size. The entire document collection is about 120Mb unzipped and 30Mb zipped. If anyone is interested in testing this collection drop me a note and I will upload the dmoz_health repository archive to Skydrive. This time...(read more)

    Read the article

  • SQLSaturday #60 - Cleveland Rocks!

    - by Mike C
    Looking forward to seeing all the DBAs, programmers and BI folks in Cleveland at SQLSaturday #60 tomorrow! I'll be presenting on (1) Intro to Spatial Data and (2) Build Your Own Search Engine in SQL. I've reworked the Spatial Data presentation based on feedback from previous SQLSaturday events and added more sample code. I also expanded the Build Your Own Search Engine code samples to demonstrate additional FILESTREAM functionality. See you all tomorrow! A little road music, please! http://www.youtube.com/watch?v=vU0JpyH1gC...(read more)

    Read the article

  • How do you make slides for programming talks?

    - by Yuvi Masory
    I've given a few talks recently and I have not found a good way to make slides. Here are a few desirable characteristics for programming slides: They're slides. A standard emacs buffer won't do it. They have syntax highlighting for code. They support basic formatting, like font size and color and bullets. No fancy animations needed. The only animation I desire is one-by-one appearance of bullets. So far I have considered: Microsoft Office - out of the question for Linux users. OpenOffice.org - too much for my needs; code formatting/highlighting needs to be done externally and pasted in. On the plus side, it supports bullets, bullet-by-bullet animation, and font formatting. Emacs - supports all the code formatting, but I haven't found a slides mode that lets me transition from one chunk to another. HTML5 - I once made slides using html5rocks as a template. It supports everything, but it is too hard and time-consuming to "throw together" a few slides before a minor talk. Also, the HTML5-only features may not work on the podium computer's installed browser. Any suggestions for programs/techniques for making code-centric presentations?

    Read the article

  • How large should I make root, home, and swap partitions?

    - by starcorn
    Hello, I have a laptop with Win7 installed. I have now made a 60GB partition into which I want to install Ubuntu. The question I have, before I do the installation, is how large each of the root, swap, and home partitions should be? I have read somewhere that root could be as small as 8GB, but isn't that too small, since I guess besides Ubuntu all the installed software will reside there as well? And I think I'm going to set my swap to be 2GB. My main concern is how large the root partition should be. I'm mainly going to use Ubuntu for programming and browsing the web.

    Read the article

  • Applying the Knuth-Plass algorithm (or something better?) to read two books with different length and amount of chapters in parallel

    - by user147133
    I have a Bible reading plan that covers the whole Bible in 180 days. For most of the time, I read 5 chapters in the Old Testament and 1 or 2 (1.5) chapters in the New Testament each day. The problem is that some chapters are longer than others (for example Psalm 119, which is 7 times longer than an average chapter in the Bible), and the plan I'm following doesn't take that into account. I end up with some days having a lot more to read than others. I thought I could use programming to make myself a better plan. I have a data structure with a list of all chapters in the Bible and their length in number of lines. (I found that the number of lines is the best criterion, but it could have been number of verses or number of words as well.) I then started to think about this problem as a line-wrap problem. Think of a chapter like a word, a day like a line, and the whole plan as a paragraph. The "length" of a word (a chapter) is the number of lines in that chapter. I could then generate the best possible reading plan by applying a simplified Knuth-Plass algorithm to find the best breakpoints. This works well if I want to read the Bible from beginning to end. But I want to read a little from the New Testament each day in parallel with the Old Testament. Of course I can run the Knuth-Plass algorithm on the Old Testament first, then on the New Testament, and get two separate plans. But those plans merged are not an optimal plan. Worst-case days (days with extra much reading) in the New Testament plan will randomly occur on the same days as the worst-case days in the Old Testament. Since the New Testament has about 180*1.5 chapters, the plan is generally to read one chapter the first day, two the second, one the third, etc., and I would like the plan for the Old Testament to compensate for this alternating length. So I will need a new and better algorithm, or I will have to use the Knuth-Plass algorithm in a way that I've not figured out. I think this could be an interesting and challenging nut for people interested in algorithms, so I wanted to see if any of you have a good solution in mind.
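
    One way to couple the two plans (my own sketch, not from the post): fix the New Testament schedule first, work out how many lines it contributes to each day, and then let a Knuth-Plass-style dynamic program place the Old Testament breakpoints so that every day's combined line count stays close to the overall daily average. The Python sketch below assumes ot_chapters (chapter lengths in lines, in reading order) and nt_lines_per_day (the NT lines already assigned to each of the 180 days) as inputs:

        def plan_old_testament(ot_chapters, nt_lines_per_day):
            """Return per-day Old Testament chapter counts that minimise the sum of
            squared deviations of each day's combined line count from the average."""
            days = len(nt_lines_per_day)
            n = len(ot_chapters)
            target = (sum(ot_chapters) + sum(nt_lines_per_day)) / days

            prefix = [0]                    # prefix[i] = lines in chapters 0..i-1
            for c in ot_chapters:
                prefix.append(prefix[-1] + c)

            INF = float("inf")
            # best[d][i] = least badness of fitting the first i chapters into d days
            best = [[INF] * (n + 1) for _ in range(days + 1)]
            choice = [[0] * (n + 1) for _ in range(days + 1)]
            best[0][0] = 0.0
            for d in range(1, days + 1):
                for i in range(d, n + 1):        # at least one chapter per day
                    for j in range(d - 1, i):    # chapters j..i-1 are read on day d
                        lines = prefix[i] - prefix[j] + nt_lines_per_day[d - 1]
                        cost = best[d - 1][j] + (lines - target) ** 2
                        if cost < best[d][i]:
                            best[d][i] = cost
                            choice[d][i] = j

            counts, i = [], n                    # walk the breakpoints back
            for d in range(days, 0, -1):
                j = choice[d][i]
                counts.append(i - j)
                i = j
            return list(reversed(counts))

    This is O(days x chapters^2), which is workable for roughly 929 Old Testament chapters over 180 days, though a pure-Python run takes a while; restricting j to a window around the expected breakpoint for each day makes it much faster.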

    Read the article

  • Multilingual Publishing Pack (MLP): make a link to the corresponding page in another language?

    - by lyle
    I am helping to build a bilingual website using MLP on TextPattern. It's trivial to put a link to the top-level page of another language, but how do I put a link to the current page in another language? E.g. /en/contact should link to /de/kontakt (the same article in the other language). I'm sure there are some variables somewhere that I could put into the template that would be filled with the correct links.

    Read the article

  • How much time do you need in between large projects?

    - by Mattio
    You've launched a large project at work, something that's been in progress and taken up large chunks of your life for more than 6 months. The post-launch triage is over. Tech support isn't calling you every hour because they don't know how to troubleshoot an issue. Your hours drop from 60+/wk to whatever is normal in your organization (which is hopefully less than 60+!). How much time do you (or your team) need before the next large project begins? I was asked this question at work and I think the ideal minimum is two weeks -- one week to clear your desk and inbox + one week to clear your head and remember what it's like to have a life outside of work. I'd frankly acknowledge that just being asked this question is a huge boon to work/life balance. But I do think it's possible to go too long in between.

    Read the article

  • Blank lines between source code [closed]

    - by manix
    I'm confused by some strange behaviour. I have been editing some PHP files remotely with PhpDesigner 8 (a PHP editor). Everything looks right on my end, but when my teammates reopen the files I have edited, the source code comes back with an extra blank line after every line, so code like class AdminController extends Controller { function __construct() { parent::__construct(); if (!$this->session->can_admin()) { show_error('Solo para administradores.'); } $this->load->library('backend'); } } ends up double-spaced. Have you experienced this kind of problem?
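
    Not from the post, but one common cause of exactly this symptom is a line-ending mismatch: the remote editor saves Windows CR+LF (or bare CR) line endings and the teammates' editors then show an extra blank line for each one. A small Python sketch that normalizes a file to plain LF endings, assuming that is the culprit:

        # Hypothetical fix-up script: rewrite the named file with LF-only line endings.
        import sys

        path = sys.argv[1]
        with open(path, "rb") as f:
            data = f.read()
        data = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
        with open(path, "wb") as f:
            f.write(data)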

    Read the article

  • URL Encryption vs. Encoding

    - by hozza
    At the moment, non/semi-sensitive information is sent from one page to another via GET in our web application, such as user ID or the page number requested. Sometimes slightly more sensitive information is passed, such as account type or user privileges. We currently use base64_encode() and base64_decode() just to de-humanise the information so the end user is not concerned. Is it good practice, or commonplace, for a URL GET parameter to be encrypted rather than simply base64-encoded in PHP? Perhaps using something like this: $encrypted = base64_encode(mcrypt_encrypt(MCRYPT_RIJNDAEL_256, md5($key), $string, MCRYPT_MODE_CBC, md5(md5($key)))); $decrypted = rtrim(mcrypt_decrypt(MCRYPT_RIJNDAEL_256, md5($key), base64_decode($encrypted), MCRYPT_MODE_CBC, md5(md5($key))), "\0"); Is this too much, or too power-hungry, for something as common as a GET URL?
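
    As an aside (not from the article): if the goal is only to stop users from tampering with the values, signing them is cheaper and simpler than encrypting them, and truly sensitive values are better kept server-side in the session. A language-agnostic illustration of the signing idea, sketched here in Python with the standard library (the key and parameter names are placeholders):

        # Sketch: tamper-evident query values via HMAC.
        import base64
        import hashlib
        import hmac

        SECRET_KEY = b"replace-with-a-long-random-server-side-secret"

        def sign(value: str) -> str:
            mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
            return base64.urlsafe_b64encode(mac).decode()

        def verify(value: str, signature: str) -> bool:
            return hmac.compare_digest(sign(value), signature)

        # e.g. build ?account_type=admin&sig=... and reject the request if verify() fails
        token = sign("account_type=admin")
        assert verify("account_type=admin", token)
        assert not verify("account_type=user", token)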

    Read the article

  • Uploading large XML to WCF REST service -> 400 Bad Request

    - by glenn.danthi
    I am trying to upload large XML files to a REST service... I have tried almost all of the methods specified on Stack Overflow and Google, but I still can't find out where I am going wrong: I cannot upload a file greater than 64 KB! I have specified the maxRequestLength: <httpRuntime maxRequestLength="65536"/> and my binding config is as follows: <bindings> <webHttpBinding> <binding name="RESTBinding" maxBufferSize="67108864" maxReceivedMessageSize="67108864" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647"/> </binding> </webHttpBinding> </bindings> On the C# client side I am doing the following: WebRequest request = HttpWebRequest.Create(@"http://localhost.:2381/RepositoryServices.svc/deviceprofile/AddDdxml"); request.Credentials = new NetworkCredential("blah", "blah"); request.Method = "POST"; request.ContentType = "application/xml"; request.ContentLength = byteArray.LongLength; using (Stream postStream = request.GetRequestStream()) { postStream.Write(byteArray, 0, byteArray.Length); } There is no special configuration done on the client side.

    Read the article

  • C# average function for large numbers without overflow exception

    - by Ron Klein
    .NET Framework 3.5. I'm trying to calculate the average of some pretty large numbers. For instance: using System; using System.Linq; class Program { static void Main(string[] args) { var items = new long[] { long.MaxValue - 100, long.MaxValue - 200, long.MaxValue - 300 }; try { var avg = items.Average(); Console.WriteLine(avg); } catch (OverflowException ex) { Console.WriteLine("can't calculate that!"); } Console.ReadLine(); } } Obviously, the mathematical result is 9223372036854775607 (long.MaxValue - 200), but I get an exception there. This is because the implementation (on my machine) of the Average extension method, as inspected with .NET Reflector, is: public static double Average(this IEnumerable<long> source) { if (source == null) { throw Error.ArgumentNull("source"); } long num = 0L; long num2 = 0L; foreach (long num3 in source) { num += num3; num2 += 1L; } if (num2 <= 0L) { throw Error.NoElements(); } return (((double) num) / ((double) num2)); } I know I can use a BigInt library (yes, I know that it is included in .NET Framework 4.0, but I'm tied to 3.5). But I still wonder if there's a pretty straightforward implementation of calculating the average of integers without an external library. Do you happen to know of such an implementation? Thanks!!
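
    One overflow-free approach (not from the question, and shown here in Python rather than C#, but the same pattern carries over to a loop with a double accumulator) is to keep a running mean instead of a running sum:

        # Running (incremental) mean: avg_k = avg_{k-1} + (x_k - avg_{k-1}) / k.
        # The accumulator never grows beyond max(|x|), so it cannot overflow;
        # the trade-off is ordinary floating-point rounding in the result.
        def running_average(values):
            avg = 0.0
            for k, x in enumerate(values, start=1):
                avg += (x - avg) / k
            return avg

        items = [2**63 - 101, 2**63 - 201, 2**63 - 301]   # long.MaxValue - 100, -200, -300
        print(running_average(items))                     # ~9.223372036854776e18

    In C#, the same loop with a double average and a long counter sidesteps the OverflowException, at the cost of some precision, since a double cannot represent every 64-bit integer exactly.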

    Read the article

  • Structuring projects & dependencies of large winforms applications in C#

    - by Benjol
    UPDATE: This is one of my most-visited questions, and yet I still haven't really found a satisfactory solution for my project. One idea I read in an answer to another question is to create a tool which can build solutions 'on the fly' for projects that you pick from a list. I have yet to try that, though. How do you structure a very large application? Multiple smallish projects/assemblies in one big solution? A few big projects? One solution per project? And how do you manage dependencies in the case where you don't have one solution? Note: I'm looking for advice based on experience, not answers you found on Google (I can do that myself). I'm currently working on an application which has upward of 80 DLLs, each in its own solution. Managing the dependencies is almost a full-time job. There is a custom in-house 'source control' with added functionality for copying dependency DLLs all over the place. That seems like a suboptimal solution to me, but is there a better way? Working on a solution with 80 projects would be pretty rough in practice, I fear. (Context: WinForms, not web.) EDIT: (If you think this is a different question, leave me a comment.) It seems to me that there are interdependencies between: project/solution structure for an application; folder/file structure; branch structure for source control (if you use branching). But I have great difficulty separating these out to consider them individually, if that is even possible. I have asked another related question here.

    Read the article

  • OutOfMemoryException, large Private Data

    - by Captain Comic
    Hello, in a previous episode of this series: http://stackoverflow.com/questions/2543648/outofmemoryexception-stack-size-is-huge-large-number-of-threads I have a .NET Windows service that consumes a lot of memory. The GC heap is not big. Also, the stack size is not big. What is big is something called private data. I can also see in Task Manager that my application consumes a lot of something that Task Manager calls handles. My application consumes 2326 handles. I believe that these handles are some Windows handles that occupy private data. I can see that this private data is occupied by blocks marked as Thread Environment Block. What is that? Screenshot of my application memory usage by VMMap. Screenshot of my application memory usage by Task Manager. UPDATE: I ran Process Explorer. I have two instances of my service running at the moment. I can see that they consume a lot of virtual memory for the Gen2 GC heap. This looks suspicious. Also, the total reserved GC heap size is the same for the two processes.

    Read the article

  • Starting out NLP - Python + large data set

    - by pencilNero
    Hi, I've been wanting to learn Python and do some NLP, so I have finally gotten round to starting. I downloaded the English Wikipedia mirror for a nice chunky dataset to start on, and have been playing around a bit; at this stage I'm just getting some of it into a SQLite DB (I haven't worked with DBs in the past, unfortunately). But I'm guessing SQLite is not the way to go for a full-blown NLP project (/experiment :) - what sort of things should I look at? HBase (...and Hadoop) seem interesting; I guess I could run them in Java, prototype in Python, and maybe migrate the really slow bits to Java. Alternatively, just run MySQL... but the dataset is 12 GB - I wonder if that will be a problem? I also looked at Lucene, but I'm not sure how (other than breaking the wiki articles into chunks) I'd get that to work. What comes to mind for a really flexible NLP platform (I don't really know at this stage WHAT I want to do... I just want to learn large-scale language analysis, to be honest)? Many thanks.
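
    Not from the post, but as a scale illustration: a 12 GB dump is manageable in plain Python as long as it is streamed rather than loaded whole; the file name and tokenizer below are placeholders:

        # Sketch: stream a large text dump line by line and build a word-frequency
        # table without ever holding the whole corpus in memory.
        import re
        from collections import Counter

        TOKEN = re.compile(r"[a-z']+")
        counts = Counter()

        with open("enwiki_articles.txt", encoding="utf-8", errors="ignore") as corpus:
            for line in corpus:
                counts.update(TOKEN.findall(line.lower()))

        print(counts.most_common(20))

    SQLite copes fine with a database of this size as an index; what matters is keeping the heavy passes streaming, as above, and libraries such as NLTK build on exactly this kind of corpus iteration.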

    Read the article

  • Viewing a large-resolution VNC server through a small-resolution viewer in Ubuntu

    - by Madiyaan Damha
    I have two Ubuntu computers, one with a large screen resolution (1920x1600) that is running the default Ubuntu VNC server. I have another computer with a resolution of about 1200x1024 that I use to VNC into the server (I use the default Ubuntu VNC viewer). Now everything works fine, except there are annoying scrollbars in the viewer because the server's desktop resolution is so much higher than the viewer's. Is there a way to: 1) Scale the server's desktop down to the viewer's resolution. I know there will be a loss of image quality, but I am willing to try it out. This should be something like how Windows Media Player or VLC scales down the window (and does some interpolation of pixels). 2) Automatically shrink the resolution of the server to the client's when I connect, and scale the resolution back when I disconnect. This seems like a less attractive solution. 3) Any other solution that gurus out there use? I am sure someone has experienced this before (annoying scroll bars), so there must be a solution out there. Thanks,

    Read the article

  • WPF performance on scaling a large scene

    - by Mark
    I have a full-screen app that I want to be able to zoom in on in certain areas. I have the code working fine, but I notice that when I get closer in, the zoom-in animation (which animates the ScaleTransform.ScaleX and ScaleTransform.ScaleY properties on a parent canvas) starts to jerk a little and the frame rate suffers. I'm not using any BitmapEffects or anything, and ideally I would like my scene to get more complicated than it currently is. The scene is quite large, 1980x1024; this is a requirement and cannot be changed. The current layout is like this: <Canvas x:Name="LayoutRoot"> <Canvas x:Name="ContainerCanvas"> <local:MyControl x:Name="c1" /> <!-- numerous other controls and elements that compose the scene --> </Canvas> </Canvas> The code that zooms in just animates the RenderTransform of the ContainerCanvas, which in turn scales its children, which gives the desired effect. However, I'm wondering if I need to swap out the ContainerCanvas for a Viewbox or something like that? I've never really worked with Viewbox/Viewport controls in WPF before; can they even help me out here? Smooth zooming is a huge requirement of the client and I must get this resolved. All ideas are welcome. Thanks a lot, Mark

    Read the article

  • Selecting a Java framework for large application w/ only ONE user

    - by Bijan
    I am building a large application that will be hosted on an AWS server. I'm trying to select a web framework to help me with code organization, template design, and presentation aspects generally. Here are some points of consideration: I require security/login/user authentication. I may add the ability in the future to allow more than just an administrator to access the web app, but it is not a public-facing website. AJAX support would be helpful. There are a couple of widgets that I don't want to recreate. One is a tree object, where the user can expand/contract items in the list, create new branches, and add/edit objects. This would be better off in some dynamic view rather than all done in ugly HTML. Generally, this is just to provide the application with a face for control, management, and monitoring. Having an easier time adding buttons, CSS, and AJAX widgets would be a great addition, though, but it's not the primary purpose. I'm considering: Wicket, Spring, Seam, GWT, Stripe, and the list goes on, as I'm sure you all know. I originally planned on using GWT, but then started to feel that GWT didn't cover my primary needs. I could be wrong about this, but there seems to be a lot of support for GWT AND Wicket/Spring. All of this 'getting lost in Java frameworks' got me thinking outside the Java realm for a framework that would suit my needs, such as: JRuby/Rails, Jython/Django, Groovy/Grails, Guice (just throwing this in there... I don't clearly understand the main purposes of all these frameworks; it doesn't seem like dependency injection is something I need for a single-purpose application). Thanks as always. This community makes Googling for esoteric programming information an order of magnitude better.

    Read the article

  • NSKeyedArchiver on NSArray has large size overhead

    - by redguy
    I'm using NSKeyedArchiver in a Mac OS X program which generates data for an iPhone application. I found out that, by default, the resulting archives are much bigger than I expected. Example: NSMutableArray * ar = [NSMutableArray arrayWithCapacity:10]; for (int i = 0; i < 100000; i++) { NSString * s = [NSString stringWithFormat:@"item%06d", i]; [ar addObject:s]; } [NSKeyedArchiver archiveRootObject:ar toFile: @"NSKeyedArchiver.test"]; This stores 10 * 100000 = 1M bytes of useful data, yet the size of the resulting file is almost three megabytes. The overhead seems to grow with the number of items in the array. In this case, for 1000 items, the file was about 22k. "file" reports that it is an "Apple binary property list" (not the XML format). Is there a simple way to prevent this huge overhead? I wanted to use NSKeyedArchiver for the simplicity it provides. I can write data to my own, non-generic, binary format, but that's not very elegant. Also, aggregating the data into large chunks and feeding these to the NSKeyedArchiver should work, but again, that kind of defeats the point of using a simple & easy & ready-to-use archiver. Am I missing some method call or usage pattern that would reduce this overhead?

    Read the article

  • How to delete a large cookie that causes Apache to 400

    - by jakemcgraw
    I've come across an issue where a web application has managed to create a cookie on the client, which, when submitted by the client to Apache, causes Apache to return the following: HTTP/1.1 400 Bad Request Date: Mon, 08 Mar 2010 21:21:21 GMT Server: Apache/2.2.3 (Red Hat) Content-Length: 7274 Connection: close Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>400 Bad Request</title> </head><body> <h1>Bad Request</h1> <p>Your browser sent a request that this server could not understand.<br /> Size of a request header field exceeds server limit.<br /> <pre> Cookie: ::: A REALLY LONG COOKIE ::: </pre> </p> <hr> <address>Apache/2.2.3 (Red Hat) Server at www.foobar.com Port 80</address> </body></html> After looking into the issue, it would appear that the web application has managed to create a really long cookie, over 7000 characters. Now, don't ask me how the web application was able to do this, I was under the impression browsers were supposed to prevent this from happening. I've managed to come up with a solution to prevent the cookies from growing out of control again. The issue I'm trying to tackle is how do I reset the large cookie on the client if every time the client tries to submit a request to Apache, Apache returns a 400 client error? I've tried using the ErrorDocument directive, but it appears that Apache bails on the request before reaching any custom error handling.

    Read the article
