Search Results

Search found 89549 results on 3582 pages for 'large file support'.


  • How to estimate effort required to convert a large codebase to another language/platform

    - by Justin Branch
    We have an MFC C++ program with around 200,000 lines of code in it. It's pretty much finished. We'd like to hire someone to convert it to work for Macs, but we are not sure how to properly estimate a reasonable timeline for this project. What techniques can we use to estimate what it would take to convert this project to work on a Mac? Also, is there anything in particular we should be watching out for specific to this sort of conversion?

    Read the article

  • Viewport.Unproject - Checking if a model intersects a large sprite

    - by Fibericon
    Let's say I have a sprite, drawn like this:

        spriteBatch.Draw(levelCannons[i].texture, levelCannons[i].position, null,
            alpha, levelCannons[i].rotation, Vector2.Zero, scale,
            SpriteEffects.None, 0);

    Picture levelCannon as a laser beam that goes across the entire screen. I need to see if my 3D model intersects with the screen space inhabited by the sprite. I managed to dig up Viewport.Unproject, but that seems to be useful only when dealing with a single point in 2D space, rather than an area. What can I do in my case?
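
    A rough sketch of one idea I've been toying with, going the other direction (modelSphere, view, projection, and spriteRect are placeholder names, not from my actual code): instead of unprojecting the sprite, project the model's bounding sphere into screen space with Viewport.Project and test the resulting rectangle against the sprite's screen rectangle. Would something like this be reasonable?

        // Sketch only: project the sphere's center, plus a point one radius
        // away, to approximate the model's footprint on the screen.
        Vector3 screenCenter = GraphicsDevice.Viewport.Project(
            modelSphere.Center, projection, view, Matrix.Identity);
        Vector3 screenEdge = GraphicsDevice.Viewport.Project(
            modelSphere.Center + Vector3.Right * modelSphere.Radius,
            projection, view, Matrix.Identity);
        float screenRadius = Vector2.Distance(
            new Vector2(screenCenter.X, screenCenter.Y),
            new Vector2(screenEdge.X, screenEdge.Y));

        // Screen-space rectangle around the projected sphere.
        Rectangle modelRect = new Rectangle(
            (int)(screenCenter.X - screenRadius),
            (int)(screenCenter.Y - screenRadius),
            (int)(screenRadius * 2),
            (int)(screenRadius * 2));

        bool hit = modelRect.Intersects(spriteRect); // spriteRect: the laser's screen area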

    Read the article

  • Business knowledge in a large financial org?

    - by Victor
    As a programmer working in the finance industry, I recently got a project that is a hedge fund administrative application (used to calculate NAVs, allocate assets, etc.). From a business point of view this is a good thing. When we think of our 'next' project, typically the impulse is to think in terms of technology, e.g. 'I want to work on a project that uses SOA/cloud etc.' I am interested to know if anyone, while career planning, also takes into account the business aspect of a future project, i.e. what the application does. So does anybody ever think like this: 'I wish to work on a trading system so I can understand capital markets better' instead of 'I want to work on a project that uses SOA/cloud'? I ask because it appears to me that in the finance domain, for senior positions, good business knowledge pays well. So maybe a guy who knows more business, but not so much of the latest technology, is at an advantage? The rockstar programmer seems better suited to an aggressive startup; big old finance orgs rarely invest in tech just for the 'cool factor'. No?

    Read the article

  • Getting started on Large Projects

    - by Mercfh
    So I just graduated from my college with a B.S. in Computer Science (although it was a good school, we're the only accredited CS department in our state... for whatever that means, lol). I feel like I'm a decent programmer, not amazing, but not terrible. Anyway, I got my first job about two weeks ago. It's a pretty entry-level job: firmware development/tester (I know, I know, people look down on testers, but I gotta start somewhere). There isn't a whole lot of coding to be had right now (mostly simple stuff), but soon I'll have the option of helping out with development, which is what I want to do. Thing is, I have NEVER worked on a huge project. I mean, in school sure we had "group" projects, but nothing really big. So I'm not super familiar with HUGE classes and such (our main language was C++). Is this something I'll just get used to with time? Some fellow students got used to it through internships, but I never got that chance; my work was mostly a "one man job" kind of thing, mostly little things, and in class we never did huge projects anyway. So how do you guys "plan" out these things? Do you use a whiteboard and sketch out classes and such, or what? Also, another worry of mine is that I have to use Google... A LOT... for examples of code, because sometimes I just don't get how something works. Is this normal? It makes me feel sort of stupid, I guess. I mean, "technically" I've had 4-5 years of coding experience, but it really only feels like 2 years of REAL experience, if that makes any sense. Thanks

    Read the article

  • Extract large zip file (50G) on Mac OS X

    - by chingjun
    I was trying to move some files to another hard drive, so I archived all my photos in one large zip file using the Mac OS X built-in compress function. But the file failed to extract. I've tried many programs, but none of them were able to extract the file: Mac OS X's extract utility, StuffIt Expander, and 7-Zip (command line) all failed. Mac's Archive Utility and StuffIt don't seem to support large files, and 7-Zip's command line version gave an error stating the archive is unsupported. I had no luck on Windows either, as many of my files have Chinese filenames and wouldn't extract to the correct names under Windows. Could anyone please suggest some programs that can support large files, can handle files compressed using Mac OS X's compress function, and can support UTF-8 filenames? With or without a GUI is fine. Thank you in advance.

    Read the article

  • Static pages for large photo album

    - by Phil P
    I'm looking for advice on software for managing a largish photo album for a website: 2000+ pictures, one-time drop (probably). I normally use MarginalHack's Album, which does what I want: pre-generate thumbnails and HTML for the pictures, so I can serve without needing a dynamic run-time and there's less attack surface to worry about. However, it doesn't handle pagination or the like, so it's unwieldy for this case. This is a one-time drop of pictures from a wedding, with a shared usercode/password for distribution to the guests; I don't wish to put the pictures in a third-party hosting environment. I don't wish to use PHP, simply because that's another run-time to worry about; I might relent and use something dynamic if it's Python or Perl based (as I can maintain things written in those). I currently have: Apache serving static files, Album-generated, with some sub-directories to divide up the content and keep it a little more manageable. Something like Album but with pagination already handled would be great, but I'm willing to have something a little more dynamic if it lets people comment or caption and stores the extra data in something like an SQLite DB. I'd want something lightweight, not a full-blown CMS with security updates every three months. I don't want to upload pictures of other people's children to a third-party free service where I don't know what the revenue model is. (For my site: revenue is none, costs out of pocket.) Existing server hosting is *nix, Apache, some WSGI. Client-side I have Mac OS. Any advice?

    Read the article

  • Why is my content database so large?

    - by PeterBrunone
    If your SharePoint site collection hasn't grown, but your content database has, the most likely culprit is versioning. If a list -- or worse, a library -- has versioning enabled, the default is to keep every single one. That means that every time someone edits and checks in a document, its storage footprint increases by the size of the document (and probably a little more).

    The solution? It could be a bit painful, but you'll need to go back into each library and restrict the number of versions to keep (three is sufficient for most uses, but your needs may vary). I suggest keeping only major versions as well, since minor versions are really just stopping points on the way to a published document.

    Of course, if you have a real business need to keep all those versions around, then you'll want to look into an archiving solution that will take the old versions out of the content database but still make them available if necessary.
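
    If you have many libraries to fix, this can also be scripted against the server object model. A rough sketch only (the site URL is illustrative, and note that a trimmed limit is typically only enforced as documents are next checked in, so old versions don't vanish instantly):

        // Requires a reference to Microsoft.SharePoint.dll (server-side code).
        using (SPSite site = new SPSite("http://sharepoint/sites/yoursite"))
        using (SPWeb web = site.OpenWeb())
        {
            foreach (SPList list in web.Lists)
            {
                if (!list.EnableVersioning)
                    continue;

                list.MajorVersionLimit = 3; // keep only the last three major versions

                // Minor (draft) versions only exist on document libraries.
                if (list.BaseType == SPBaseType.DocumentLibrary)
                    list.EnableMinorVersions = false;

                list.Update();
            }
        }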

    Read the article

  • Why are Back In Time snapshots so large?

    - by Chethan S.
    I just backed up the contents of my home partition onto my external hard drive using Back In Time. I browsed to the backed-up contents on the external drive, and under Properties it showed me the size as 9.6 GB. I had read that for subsequent snapshots, Back In Time does not back everything up again but creates hard links for older content and saves only the newer content, and I wanted to test this. So I copied two small files into my home partition and ran 'Take Snapshot' again. The operation completed within a minute: first it checked the previous snapshot, assessed the changes, detected the two new files, and synced them. After this, when I browsed to the backed-up contents, I was surprised to see the newer and older backups taking up 9.6 GB each. Isn't this a waste of hard drive space? Or did I misinterpret something?

    Read the article

  • Speed up your Silverlight Debugging for large projects

    - by Aligned
    I'm working on a 5+ year old ASP.NET project that has 74+ projects in it, and we've been adding new Silverlight applications to run in the ASP.NET page islands. My machine at work isn't the most powerful, so I find myself waiting a lot for the whole thing to build. I'm using Visual Studio 2010, so that takes up a lot of resources as well. This causes me to get distracted and I start looking at the news... I need to combat that more :-). I can't get a new machine (that's up to someone else), so I've found a few tricks to help.

    1. Only build the Silverlight project you're working with. This will build all referenced projects (you can see these by right-clicking and choosing Project Dependencies) and package a new XAP (you can see all the actions in your build output window). Then refresh your page with the Silverlight app and it's up-to-date.

    2. I was working with a co-worker (thanks Jordan) who was using the Debug -> Attach to Process window. In the "Attach to:" row there is a "Select..." button. In the dialog, click "Debug these code types:" and select Silverlight. Hit OK. Then all you need to do is find your process (you might need to click the refresh button). I'm usually debugging in IE, so I select the first one and push "i" on the keyboard, which brings me to the IE windows that are open. Find the one with type Silverlight, x86. It is usually directly above one with type x86 that has the page title under "Title". Click Attach and watch your output window spit out messages about loading debug symbols and your breakpoints being enabled (if this doesn't happen, you chose the wrong process; hit Stop and try again). Now you can debug the client code as normal. Server code requires a full F5 or attaching to the correct process.

    To improve this even further, bind the menu item to a keystroke. I chose Ctrl+X, X (Tools -> Options -> Keyboard, search for Debug.AttachToProcess, set the shortcut keys globally, and assign). Most of the time I build the project, then hit Ctrl+X, X, then "i", then Enter, and I'm debugging. The process I want is usually the first IE in the list.

    Read the article

  • How to get experience in large scale databases?

    - by Justin
    I have written applications that are very small-scale, and the code I write works fine for them. But I have often wondered how the server-side code I write would scale up from hundreds of queries per day to millions. Also, when looking at possible jobs and projects, people are often looking for developers with experience in this sort of high-traffic database design, so I would at least like to be able to say: I haven't gotten to work on a project that was this popular, but I have at least tried to simulate it. Are there tools or frameworks that can generate a lot of traffic, or at least simulate what would happen with traffic of different orders of magnitude, so I could get some practice writing optimized code for higher-traffic applications?
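
    To make the question concrete: the best I've come up with on my own is a toy load generator along these lines (the URL and numbers are made up), but I suspect dedicated tools do this much better:

        // Sketch: fire many concurrent requests at one endpoint and time the run.
        const int requests = 1000;
        var sw = System.Diagnostics.Stopwatch.StartNew();
        System.Threading.Tasks.Parallel.For(0, requests,
            new System.Threading.Tasks.ParallelOptions { MaxDegreeOfParallelism = 50 },
            i =>
            {
                using (var client = new System.Net.WebClient())
                {
                    // Hypothetical endpoint; vary the query to defeat caching.
                    client.DownloadString("http://localhost/api/orders?id=" + i);
                }
            });
        sw.Stop();
        System.Console.WriteLine("{0} requests in {1} ms", requests, sw.ElapsedMilliseconds);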

    Read the article

  • How to effectively "sell" a good design in large meetings

    - by User1
    Many times I have witnessed a sad tragedy. Here's what happens:

    1. A team design review for a new project.
    2. I see a simple design that has quite a few holes.
    3. I casually mention the holes and ways to avoid them.
    4. The warnings are ignored, with comments like "that will 'never' happen in real life."
    5. Eventually, the things that "will 'never' happen" happen.
    6. An emergency team design review for a broken project.

    So what do I do? Copping the "I told you so" attitude is not going to win friends and influence people. Sometimes years go by and the comments from step 3 are forgotten anyway. I definitely don't want to be the annoying pest reminding the world of the gotchas, so I often sit back and watch the Titanic sail off to Europe. It's frustrating to see bad designs move forward. It's also frustrating that I can't seem to convince others of the pending peril of the current path. I do worst in team meetings where everyone understands the same terms differently, and egos tend to win over reason and thought. I'm looking for good tactics for convincing groups of people to adopt new and complicated ideas.

    Read the article

  • Am I sending large amounts of data sensibly?

    - by Sofus Albertsen
    I am about to design a video conversion service that is scalable on the conversion side. The architecture is as follows:

    1. A web page for video upload.
    2. When the upload is done, a message gets sent out to one of several resizing servers.
    3. The server locates the video, saves it on disk, and converts it to several formats and resolutions.
    4. The resizing server uploads the output to a content server and messages back that the conversion is done.

    Messaging is something I have covered, but right now I am transferring via FTP, and I wonder if there is a better way. Is there something faster or more reliable? All the servers will be sitting on the same gigabit switch or a neighboring switch, so fast transfer is expected.
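
    For example, would plain HTTP between the servers be a sensible replacement? Something like this sketch (the upload endpoint and paths are hypothetical) is what I had in mind, since HTTP is easy to retry and verify:

        using (var client = new System.Net.WebClient())
        {
            // PUT the finished rendition to the content server over the LAN.
            client.UploadFile("http://content01/upload/video_720p.mp4", "PUT",
                              @"C:\converted\video_720p.mp4");
        }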

    Read the article

  • Display large amount of data to client through pagination

    - by ebram tharwat
    I have a web application in which I need to show a large number of records to clients. I'll use pagination, but I was wondering whether I should:

    1. Load all the data at once. Then pagination, sorting, and searching will be easy, but it takes a long time (against a local DB it takes up to 9 sec.), or
    2. Each time I show a new page (from the pagination), make a new request to the server, and then a new request to the DB, to get the next records. But then what if the client clicks the Prev button? I'd be requesting data I had already loaded. Should I cache data that has been loaded before, and how, if that's a good technique?

    So: load all the data once, or make a new request every time I need data that may have been loaded before? I'm using ASP.NET MVC SPA with durandaljs and knockoutjs.
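
    Option 2, as I picture it, would be a single controller action that returns one page at a time, something like this sketch (names are illustrative; db stands in for my data context), with the SPA keeping a small client-side cache of pages it has already fetched for the Prev button:

        public ActionResult GetPage(int page = 1, int pageSize = 50)
        {
            var records = db.Records
                            .OrderBy(r => r.Id)           // paging needs a stable order
                            .Skip((page - 1) * pageSize)  // translated to SQL by LINQ
                            .Take(pageSize)
                            .ToList();
            return Json(records, JsonRequestBehavior.AllowGet);
        }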

    Read the article

  • Continuing to code on large projects

    - by user3487347
    I am a hobbyist programmer, and I've started many medium-sized projects to work on just by myself. These include games, a raytracer, physics simulations, etc. By the time these projects get to a certain size (around 5,000 lines), I begin to slow down in adding features to the program. This is not because of a lack of ideas of what to implement, but rather a struggle with how to go about it. In particular, I'm afraid of breaking what I already have working in order to implement a new feature. I've tried using version control like Git and Subversion, but these seem unnecessary when you are a one-man team; I simply keep a folder of "versions" of my program, one for each major change I make. How do I keep coding past this 5,000-line mark? What about the 50,000-line mark?

    Read the article

  • Canonicalization Within Large Corporations

    As you have probably heard, canonicalization is one of the latest announcements sweeping the SEO industry. What you probably haven't heard, however, is how to pronounce it, or how it will affect your website. I'll do my best to explain canonicalization in layman's terms, but forgive me if it's still tough to understand.
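
    The short version: the canonical tag is a single line of HTML in a page's head that tells search engines which URL you consider the "real" one when the same content is reachable at several addresses. Purely for illustration (example.com is a placeholder, of course):

        <link rel="canonical" href="http://www.example.com/products/swedish-fish" />

    Search engines can then consolidate ranking signals from the duplicate URLs onto that one preferred address.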

    Read the article

  • Why does moving large folders take a lot of time?

    - by acidzombie24
    What can I do to fix this? Drop permission properties? I have a large folder with 100k files in it. I moved it into my archive folder and it's taking forever to move. Why is that? I know on XP it takes < 1 sec, but not on Windows 7. I am sure it's a permission thing; is there a way I can disable it and make it faster?

    Edit: I am moving the folder into another folder on the same drive/partition. On XP, AFAIK, it just moves the folder entry from one place to another. On Windows 7, it seems to touch something in every file when I move it.

    Read the article

  • How do I upload large (30MB) files via a web interface?

    - by Dan
    Because I'm stumped... The client needs to be able to upload large images to a library, but the upload fails after 5-6 MB (over my poor connection). It seems to be timing out, as the file size at failure isn't consistent. The setup is a form which is accepted by PHP. I've googled and played with php.ini, and everything is set for big uploads and long timeouts. The platform is a dedicated Windows server at GoDaddy. What's going wrong?
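
    For reference, these are the php.ini directives I've been adjusting (the values here are illustrative, not my exact config):

        upload_max_filesize = 64M   ; per-file upload cap
        post_max_size = 64M         ; must be at least upload_max_filesize
        max_execution_time = 300    ; seconds the script may run
        max_input_time = 300        ; seconds PHP may spend reading the request
        memory_limit = 128M         ; headroom above post_max_size

    Could something in front of PHP, like the web server's own request limits or timeouts, still be cutting the upload off?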

    Read the article

  • I overwrote a large file with a blank one on a Linux server. Can I recover the original file?

    - by user39234
    I came back to my machine and tried saving a file over SSH onto my Linux server (CentOS). It failed. I wasn't interested in keeping any changes I had made, so I closed my editor and reopened the file (over SSH). The failed save attempt had wiped the file. I have made loads of changes to it since I last uploaded it to revision control. Seeing as the file has only just been wiped, I assume the data is still there somewhere. It's just a text file (PHP); is there any way of recovering it?

    Read the article

  • I just deleted my backup file! How do I save it?

    - by Sammy
    I just accidentally deleted a backup file that I need to restore my system. It's an Acronis True Image TIB file. It was stored at H:\My backups and the name of the file was File_backup_2012-10-18.tib. I did a quick scan with Recuva 1.43.623 and it found the file using the recovery wizard, but it was unable to recover it: the "state" of the file is "unrecoverable", so the resulting file is 0 bytes. I am running a deep scan with Recuva right now, but it takes a lot of time. If it should fail, what other recovery options do I have? Is there any other good file recovery software that's free for home users? I do have a second copy of the whole system partition, but I needed this file backup copy because it is more up to date. That's the file, right there! But why is Recuva unable to recover it?

    Read the article

  • Unable to delete file (JPEG)

    - by ile
    I implemented a helper for showing thumbnails from here. Next to each thumbnail there is a delete link which calls this controller action:

        //
        // HTTP POST: /Photo/Delete/1
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Delete(int id, string confirmButton)
        {
            var path = "~/Uploads/Photos/";
            Photo photo = photoRepository.GetPhoto(id);
            if (photo == null)
                return View("NotFound");

            FileInfo TheFile = new FileInfo(Server.MapPath(path + photo.PhotoID + ".jpg"));
            if (TheFile.Exists)
            {
                photoRepository.Delete(photo);
                photoRepository.Save();
                TheFile.Delete();
            }
            else
                return View("NotFound");

            return View();
        }

    If I disable showing thumbnails, then the file is deleted. Otherwise it throws an error:

        System.IO.IOException: The process cannot access the file 'C:\Documents and Settings\ilija\My Documents\Visual Studio 2008\Projects\CMS\CMS\Uploads\Photos\26.jpg' because it is being used by another process.

    I also don't know if my file delete function is properly written. Searching the net, I see everyone uses File.Delete(TheFile);, which I'm unable to use, so I use TheFile.Delete();. For File.Delete(TheFile); I get the following error:

        Error 1 'System.Web.Mvc.Controller.File(string, string, string)' is a 'method', which is not valid in the given context C:\Documents and Settings\ilija\My Documents\Visual Studio 2008\Projects\CMS\CMS\Controllers\PhotoController.cs 109 17 CMS

    Am I missing something here? Thanks in advance.

    Read the article

  • Looking for a web based, embeddable file manager

    - by Kristi H.
    I'm looking for a web-based, embeddable file manager. I haven't found anything suitable through Google or the Stack Overflow archives. Does anyone know of a file manager that meets the following criteria?

    The file manager must…

    …be embeddable (e.g. Flash or a Java applet)
    …run from my server (no storing uploads in a remote location)
    …be licensed for business use (fees are okay)
    …let me disable uploads. Users shouldn't be able to upload files, only download them
    …allow me to tag files and/or place them in folders
    …let me disable authentication or handle it from the backend. My app already has a login system and I don't want to make users log in twice
    …allow users to select and download multiple files
    …have an attractive interface

    The file manager should…

    …have an external config file
    …display thumbnail previews for files
    …be skinnable
    …be actively developed, or at least actively maintained

    Thanks for your help. I need a sophisticated file manager and I'd like to avoid writing it from scratch.

    Read the article

  • How to highlight a newly created file in a JTree

    - by newbie123
    I want it so that when I click a button, it creates a new file, and then the JTree highlights that new file. Below is my code. Currently, when I create a new file, the tree shows it but does not highlight it.

        class FileTreeModel implements TreeModel {

            private FileNode root;

            public FileTreeModel(String directory) {
                root = new FileNode(directory);
            }

            public Object getRoot() {
                return root;
            }

            public Object getChild(Object parent, int index) {
                FileNode parentNode = (FileNode) parent;
                return new FileNode(parentNode, parentNode.listFiles()[index].getName());
            }

            public int getChildCount(Object parent) {
                FileNode parentNode = (FileNode) parent;
                if (parent == null || !parentNode.isDirectory() || parentNode.listFiles() == null) {
                    return 0;
                }
                return parentNode.listFiles().length;
            }

            public boolean isLeaf(Object node) {
                return (getChildCount(node) == 0);
            }

            public int getIndexOfChild(Object parent, Object child) {
                FileNode parentNode = (FileNode) parent;
                FileNode childNode = (FileNode) child;
                return Arrays.asList(parentNode.list()).indexOf(childNode.getName());
            }

            public void valueForPathChanged(TreePath path, Object newValue) {
            }

            public void addTreeModelListener(TreeModelListener l) {
            }

            public void removeTreeModelListener(TreeModelListener l) {
            }
        }

        class FileNode extends java.io.File {

            public FileNode(String directory) {
                super(directory);
            }

            public FileNode(FileNode parent, String child) {
                super(parent, child);
            }

            @Override
            public String toString() {
                return getName();
            }
        }

        jTree = new JTree();
        jTree.setBounds(new Rectangle(164, 66, 180, 421));
        jTree.setBackground(SystemColor.inactiveCaptionBorder);
        jTree.setBorder(BorderFactory.createTitledBorder(null, "", TitledBorder.LEADING,
                TitledBorder.TOP, new Font("Arial", Font.BOLD, 12), new Color(0, 0, 0)));
        FileTreeModel model = new FileTreeModel(root);
        jTree.setRootVisible(false);
        jTree.setModel(model);
        expandAll(jTree);

        public void expandAll(JTree tree) {
            int row = 0;
            while (row < tree.getRowCount()) {
                tree.expandRow(row);
                row++;
            }
        }

    Read the article

  • Read whole ASCII file into C++ std::string

    - by Arrieta
    Hello, I need to read a whole file into memory and place it in a C++ std::string. If I were to read it into a char buffer, the answer would be very simple:

        std::ifstream t;
        int length;
        t.open("file.txt");              // open input file
        t.seekg(0, std::ios::end);       // go to the end
        length = t.tellg();              // report location (this is the length)
        t.seekg(0, std::ios::beg);       // go back to the beginning
        char *buffer = new char[length]; // allocate a buffer of the appropriate size
        t.read(buffer, length);          // read the whole file into the buffer
        t.close();                       // close the file handle
        // ... do stuff with buffer here ...

    Now I want to do the exact same thing, but using a std::string instead of a char buffer. I want to avoid loops, i.e., I don't want to do:

        std::ifstream t;
        t.open("file.txt");
        std::string buffer;
        std::string line;
        while (t) {
            std::getline(t, line);
            // ... append line to buffer and go on
        }
        t.close();

    Any ideas?

    Read the article

  • C++ file input/output

    - by Myx
    Hi, I am trying to read from a file using fgets and sscanf. Each line of the file has characters on it which I wish to put into a vector. So far, I have the following:

        FILE *fp;
        fp = fopen(filename, "r");
        if (!fp) {
            fprintf(stderr, "Unable to open file %s\n", filename);
            return 0;
        }

        // Read file
        int line_count = 0;
        char buffer[1024];
        while (fgets(buffer, 1023, fp)) {
            // Increment line counter
            line_count++;
            char *bufferp = buffer;
            ...
            while (*bufferp != '\n') {
                char *tmp;
                if (sscanf(bufferp, "%c", tmp) != 1) {
                    fprintf(stderr, "Syntax error reading axiom on "
                            "line %d in file %s\n", line_count, filename);
                    return 0;
                }
                axiom.push_back(tmp);
                printf("put %s in axiom vector\n", axiom[axiom.size()-1]);
                // increment buffer pointer
                bufferp++;
            }
        }

    My axiom vector is defined as vector<char *> axiom;. When I run my program, I get a seg fault, and it happens when I do the sscanf. Any suggestions on what I'm doing wrong?

    Read the article
