Search Results

Search found 10417 results on 417 pages for 'large'.


  • Drawing large 2D sidescroller level terrain

    - by Yar
    I'm a relatively good programmer, but now that it comes to adding some basic levels to my 2D game I'm kind of stuck. What I want to do: an acceptable, large (8000 * 1000 pixels) "green hills" test level for my game. What is the best way for me to do this? It doesn't have to look great; it just shouldn't look like it was made in MS Paint with the line and paint bucket tools. Basically it should just be mud with grass on top of it, shaped into some form of hills. But how should I draw it? I can't just take out the pencil tool and start drawing it pixel by pixel, can I?
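
    One common answer to this kind of question is to generate the terrain procedurally rather than paint it by hand. As a hedged illustration (the function, constants and rendering notes below are invented for the sketch, not taken from the question or any particular engine), a one-dimensional heightmap built from a few summed sine waves already gives passable rolling hills; everything below the height gets a tiled mud texture, with a thin strip of grass along the top:

        import math
        import random

        def hill_heights(width, base=700, amplitude=120, seed=42):
            """Return one terrain height (pixels from the top) per x column.

            A few overlapping sine waves at different wavelengths give smooth,
            non-repeating hills; tweak base/amplitude to taste.
            """
            rng = random.Random(seed)
            phases = [rng.uniform(0, 2 * math.pi) for _ in range(3)]
            heights = []
            for x in range(width):
                y = base
                y += amplitude * math.sin(x / 900.0 + phases[0])
                y += (amplitude / 3) * math.sin(x / 230.0 + phases[1])
                y += (amplitude / 10) * math.sin(x / 60.0 + phases[2])
                heights.append(int(y))
            return heights

        heights = hill_heights(8000)
        # When rendering: for each column x, draw a few pixels of grass starting
        # at heights[x], then tile a mud texture from there down to y = 1000.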

    Read the article

  • Moving large website to new CMS - URL changes

    - by herrherr
    Hi, I was wondering if you have any tips on the following situation. I'm going to move a large website to a new content management system. Here are some details on the site: online news magazine with roughly 3,000 articles; domain age: 10 years; online in its current form since May 2010; indexed pages: ~10,000; share of search engine traffic: under 10%. Unfortunately a custom-tailored CMS was used for the site. The performance, reliability and SEO capabilities have been really bad, so we are moving to a new and proven open source CMS. All the articles will be kept as they are, but the URL structure as well as the structure of the HTML templates will be changed. What I want to do now is create 301 redirects for all articles from the old schema to the new one, e.g.: Old: www.example.com/en/html/news/detail/title-of-the-article/ New: www.example.com/category/title-of-article.html Is this a proven way to do something like this? If not, can you recommend a way that has worked for you? Thanks :)
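
    Per-article 301 redirects are indeed the usual approach for a migration like this. As a hedged sketch of how the redirect rules might be produced (the CSV file, the output filename and the choice of an Apache-style Redirect directive are assumptions for illustration, not details from the question), a small script can turn an old-to-new URL mapping exported from the old CMS into a rules file:

        # Hypothetical sketch: emit one "Redirect 301" rule per article for an
        # Apache config/.htaccess file, from a CSV of (old_path,new_path) rows.
        import csv

        with open("url_mapping.csv", newline="") as src, open("redirects.conf", "w") as out:
            for row in csv.reader(src):
                if len(row) != 2:
                    continue              # skip blank or malformed lines
                old_path, new_path = row
                # e.g. /en/html/news/detail/title-of-the-article/ -> /category/title-of-article.html
                out.write("Redirect 301 %s %s\n" % (old_path, new_path))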

    Read the article

  • Handling cameras in a large scale game engine

    - by Hannesh
    What is the correct, or most elegant, way to manage cameras in large game engines? Or should I ask, how does everybody else do it? The methods I can think of are: Binding cameras straight to the engine; if someone needs to render something, they bind their own camera to the graphics engine, which stays in use until another camera is bound. A camera stack; a small task can push its own camera onto the stack, and pop it off at the end to return to the "main" camera. Attaching a camera to a shader; every shader has exactly one camera bound to it, and when the shader is in use, that camera is set by the engine. This allows me to implement a bunch of optimizations on the engine side. Are there other ways to do it?
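
    Of the three options listed, the camera stack is the easiest to sketch concretely. A minimal, engine-agnostic illustration follows (the Camera fields and the render call are invented for the example; they are not from the question):

        class Camera:
            def __init__(self, position=(0.0, 0.0, 0.0), fov=60.0):
                self.position = position
                self.fov = fov

        class CameraStack:
            """Push a camera for a temporary pass (shadow map, reflection, UI),
            pop it afterwards, and the previous camera is active again."""
            def __init__(self, main_camera):
                self._stack = [main_camera]

            def push(self, camera):
                self._stack.append(camera)

            def pop(self):
                if len(self._stack) > 1:      # never pop the "main" camera
                    self._stack.pop()

            @property
            def active(self):
                return self._stack[-1]

        # Usage: a small task temporarily takes over the view, then restores it.
        cameras = CameraStack(Camera())
        cameras.push(Camera(position=(0.0, 50.0, 0.0), fov=90.0))
        render_pass = lambda cam: None        # stand-in for the real render call
        render_pass(cameras.active)
        cameras.pop()                         # back to the main camera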

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a “test” scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it. Don’t let the Project Manager be the Product Owner First lesson learnt is to identify the correct product owner – in this instance the project manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person who is employed as a project manager to be the product owner – in fact they have the most to gain should the project fail. Know the time commitments of team members to the Project Second lesson learnt is to get a firm time commitment from the members of the team for the sprint and to hold them to it. In this project instance many of the issues we faced were with team members having to double up on supporting existing projects/systems and the scrum project. In many situations they just didn’t get round to doing any work on the scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team – in stand-up, team members would say they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse – putting up a time burn-down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan for a sprint without knowing the actual time available from the members – by actual time I mean the result of getting them to go through all their appointments, lunch times and breaks and removing those from their time commitment, which helps you get to a realistic amount of time that they can dedicate. Make sure you meet your minimum team sizes In a recent post I wrote about the difference between a partnership and a team. If you are going to do scrum in a large organization, make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people have a tendency to be sick more, take more leave and generally not be around – if you have a team size of two it is so easy to lose momentum on the project – the more people you have in the team (up to about 9), the more momentum the project will keep when people are not around. Swapping from one methodology to another can seem like waste to the customer It sounds bad, but most customers don’t care what methodology you use. Often they have bought into the “big plan upfront”. If you can, avoid taking a project on midstream from a traditional approach unless the customer has not bought into the process – with this particular project they had a detailed upfront planning breakaway with the customer using the traditional approach, and then before the project started we moved onto a scrum implementation – this seemed like waste to the customer. We should have managed the customer’s expectations properly. Don’t play the role of the scrum master if you can’t be the scrum master With this particular implementation I was the “scrum master”. But all I did was go through the formal meetings of scrum – I attended stand-ups, retrospectives and planning – but I was not hands-on on the ground. I was not performing the most important role of removing blockages – and by the end of the project there were a number of blockages “cropping up”. 
What could have been a better approach was to take someone on the team, train them to be the scrum master, and be present to coach them. Alternatively, actually be on the team on a full-time basis and be the scrum master. Just going through the meetings of scrum didn’t mean we were doing scrum. So we failed with this one, if you fail look at it from an agile perspective As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure of it became depressing. Emotions were expressed by various people on the team that were not encouraging and reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed and then focussing on learning from why, and on how to avoid the failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • Coherence Query Performance in Large Clusters

    - by jpurdy
    Large clusters (measured in terms of the number of storage-enabled members participating in the largest cache services) may introduce challenges when issuing queries. There is no particular cluster size threshold for this, rather a gradually increasing tendency for issues to arise. The most obvious challenges are that a client's perceived query latency will be determined by the slowest responder (more likely to be a factor in larger clusters) as well as the fact that adding additional cache servers will not increase query throughput if the query processing is not compute-bound (which would generally be the case for most indexed queries). If the data set can take advantage of the partition affinity features of Coherence, then the application can use a PartitionedFilter to target a query to a single server (using partition affinity to ensure that all data is in a single partition). If this can not be done, then avoiding an excessive number of cache server JVMs will help, as will ensuring that each cache server has sufficient CPU resources available and is also properly configured to minimize GC pauses (the most common cause of a slow-responding cache server).

    Read the article

  • How to speed up rsync/tar of large Maildir

    - by psusi
    I have a very large Maildir I am copying to a new machine (over 100BaseT) with rsync. The progress is slow. VERY SLOW. Like 1 MB/s slow. I think this is because there are a lot of small files being read in an order that is essentially random with respect to where the blocks are stored on disk, causing a massive seek storm. I get similar results when trying to tar the directory. Is there a way to get rsync/tar to read in disk block order, or otherwise overcome this problem?
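
    One workaround sometimes suggested for this (offered here as a hedged sketch, not a guaranteed fix) is to hand the files to the archiver in inode order, which on many filesystems roughly tracks physical layout on disk and so cuts down on seeking:

        # Hypothetical sketch: build a tar of the Maildir with entries added in
        # inode order; the path below is an assumption, adjust as needed.
        import os
        import tarfile

        maildir = "/home/user/Maildir"

        entries = []
        for root, dirs, files in os.walk(maildir):
            for name in files:
                path = os.path.join(root, name)
                entries.append((os.lstat(path).st_ino, path))

        entries.sort()                    # sort by inode number

        with tarfile.open("maildir.tar", "w") as tar:
            for _inode, path in entries:
                tar.add(path, arcname=os.path.relpath(path, maildir))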

    Read the article

  • Steps to manage a large project [closed]

    - by l46kok
    Software development is an area where parallel development in its fullest form is very difficult to achieve, although you can get reasonably close with the right design. This is especially true for game development. That being said, if you are designing a game from scratch, from engine to front end, what steps should be taken, and in what order? How would you efficiently manage your project and your team? I'm asking because several people and I are interested in working on a relatively large project for learning purposes. Initially we were going to use a proprietary engine like Unity, but since we want to learn how the engine works, we're going to start from the bottom. I'd appreciate any suggestions you can provide.

    Read the article

  • Javascript frameworks for large development teams

    - by pllee
    My company is reevaluating what kind of web framework we want to use. We are currently using the Ext 4.0 framework, but there are questions being raised that it may not be the right framework to use. I like what Ext has to offer (rich GUIs, a data package and a class system); are there other frameworks out there that are similar? Are there frameworks out there tailored to medium/large software companies? Info: potentially hundreds of developers converting thick-client screens to the web. Data modeling is important, as well as rich GUI support. Maintainability and uniformity across multiple products are important as well. Any info is greatly appreciated.

    Read the article

  • Address search from large text file

    - by user73809
    Basically I want to develop an address lookup (part of my project) using C# (and I can use SQL if necessary). I have a very large text file which has all the UK addresses and postcodes. Addresses need to be looked up from this text file. For example, as soon as a user types in the first letter of the postcode it should list all the postcodes starting with that letter, and as they type further, the list should narrow down. Could you please advise how I can do this search very fast and list the results in an efficient way? If you know of any suitable methods and technologies, please do advise.
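
    The standard trick for fast incremental prefix lookup is to keep the postcodes sorted once and binary-search for the current prefix on every keystroke. A hedged sketch of the idea follows, in Python only for brevity; the same approach carries over to C# with a sorted array, or to SQL with LIKE 'AB%' against an indexed column (the sample postcodes are made up):

        # Sketch: binary search into a sorted list gives the block of entries
        # sharing the typed prefix without scanning the whole file each time.
        import bisect

        postcodes = sorted(["AB10 1AA", "AB10 1AB", "CF10 2BB", "SW1A 1AA", "SW1A 2AA"])

        def matches(prefix, limit=50):
            lo = bisect.bisect_left(postcodes, prefix)
            out = []
            for code in postcodes[lo:lo + limit]:
                if not code.startswith(prefix):
                    break                 # past the last entry with this prefix
                out.append(code)
            return out

        print(matches("SW1A"))            # ['SW1A 1AA', 'SW1A 2AA']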

    Read the article

  • Handling large integers in python [migrated]

    - by Sushma Palimar
    I wrote a program in Python to find b such that a prime number p divides b^2 - 8. The range for b is [1, (p+1)/2]. For small integers it works, say up to 7 digits, but not for large integers, say p = 140737471578113, where I get the error message:

        for i in range (2,p1,1):
        MemoryError

    I wrote the program as:

        #!/usr/bin/python3
        p = long(raw_input('enter the prime number:'))
        p1 = long((p + 1) / 2)
        for i in range(2, p1, 1):
            s = long((i * i) - 8)
            if (s % p == 0):
                print i
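
    For what it's worth, the MemoryError comes from Python 2's range() building the entire list of about (p+1)/2 numbers in memory before the loop even starts. A hedged sketch of a constant-memory rewrite (an illustration of the fix, not the original poster's code): under Python 3, range() is lazy and int is arbitrary-precision, so the same loop runs in constant memory even for p = 140737471578113:

        # Illustrative rewrite: lazy iteration keeps memory use constant even
        # for very large p; the loop is still O(p) in time, of course.
        def find_b(p):
            for b in range(2, (p + 1) // 2):
                if (b * b - 8) % p == 0:
                    yield b

        if __name__ == "__main__":
            p = int(input("enter the prime number: "))
            for b in find_b(p):
                print(b)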

    Read the article

  • CSC Enables IT Transformation for a Large Public Sector Health Agency with Middleware

    - by Tanu Sood
    CSC is a global leader in next-generation IT services and solutions. The company helps its clients achieve strong returns on their technology investments through its best-in-class industry solutions, domain expertise, and global scale. So, when CSC was tasked with an IT modernization project, the IT services leader relied on Oracle Fusion Middleware solutions to build a next-generation, service-oriented architecture environment for this large public-sector healthcare agency's several thousand facilities. Catch this fantastic success story of how they enabled a secure, service-oriented architecture environment and a robust platform with the interoperability and scalability to support thousands of hospitals, with the capacity to serve 800,000 provider organizations and process millions of files during peak periods. Learn how Oracle Fusion Middleware can help your organization.

    Read the article

  • Large Sprite Performance

    - by Iansen
    I've got a large Sprite generated using a set of vertices (x, y coordinates) and a bitmap pattern (using moveTo, lineTo, beginBitmapFill, endFill, etc.). It's about 15000 pixels wide and between 1500 and 2000 pixels high depending on the level – it's the terrain for a 2D game. My question is: what is the best way to display/move it on the stage, performance-wise? Currently I'm just adding it to the stage as is... I get a decent frame rate / memory / CPU usage, but I want to optimize it for slower PCs. Any ideas? I've been reading a little about blitting but I'm not sure how to implement it in my case. Thanks.
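
    Blitting usually boils down to pre-rendering the big terrain into fixed-size bitmap tiles once, then copying only the tiles that overlap the camera each frame (in Flash the copy itself would typically be BitmapData.copyPixels). A hedged, language-neutral sketch of the visibility test, written in Python purely for brevity – the tile size, level width and viewport width below are invented numbers:

        TILE_W = 512            # width of each pre-rendered terrain slice, in pixels
        LEVEL_W = 15000         # total terrain width
        VIEW_W = 800            # visible stage width

        def visible_tiles(camera_x):
            """Return the indices of the tiles overlapping the viewport."""
            first = max(0, camera_x // TILE_W)
            last = min((LEVEL_W - 1) // TILE_W, (camera_x + VIEW_W - 1) // TILE_W)
            return list(range(first, last + 1))

        # Each frame, only these few tiles get copied to the screen bitmap
        # instead of moving a single 15000-pixel-wide vector Sprite around.
        print(visible_tiles(7300))        # e.g. [14, 15]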

    Read the article

  • Restructuring a large Chrome Extension/WebApp

    - by A.M.K
    I have a very complex Chrome Extension that has gotten too large to maintain in its current format. I'd like to restructure it, but I'm 15 and this is the first webapp or extension of it's type I've built so I have no idea how to do it. TL;DR: I have a large/complex webapp I'd like to restructure and I don't know how to do it. Should I follow my current restructure plan (below)? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? While it isn't relevant to the question, the actual code is on Github and the extension is on the webstore. The basic structure is as follows: index.html <html> <head> <link href="css/style.css" rel="stylesheet" /> <!-- This holds the main app styles --> <link href="css/widgets.css" rel="stylesheet" /> <!-- And this one holds widget styles --> </head> <body class="unloaded"> <!-- Low-level base elements are "hardcoded" here, the unloaded class is used for transitions and is removed on load. i.e: --> <div class="tab-container" tabindex="-1"> <!-- Tab nav --> </div> <!-- Templates for all parts of the application and widgets are stored as elements here. I plan on changing these to <script> elements during the restructure since <template>'s need valid HTML. --> <template id="template.toolbar"> <!-- Template content --> </template> <!-- Templates end --> <!-- Plugins --> <script type="text/javascript" src="js/plugins.js"></script> <!-- This contains the code for all widgets, I plan on moving this online and downloading as necessary soon. --> <script type="text/javascript" src="js/widgets.js"></script> <!-- This contains the main application JS. --> <script type="text/javascript" src="js/script.js"></script> </body> </html> widgets.js (initLog || (window.initLog = [])).push([new Date().getTime(), "A log is kept during page load so performance can be analyzed and errors pinpointed"]); // Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary var Widgets = { 1: { // Widget ID, this is set here so widgets can be retreived by ID id: 1, // Widget ID again, this is used after the widget object is duplicated and detached size: 3, // Default size, medium in this case order: 1, // Order shown in "store" name: "Weather", // Widget name interval: 300000, // Refresh interval nicename: "weather", // HTML and JS safe widget name sizes: ["tiny", "small", "medium"], // Available widget sizes desc: "Short widget description", settings: [ { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups. type: "list", nicename: "location", label: "Location(s)", placeholder: "Enter a location and press Enter" } ], config: { // Widget settings as stored in the tabs object (see script.js for storage information) size: "medium", location: ["San Francisco, CA"] }, data: {}, // Cached widget data stored locally, this lets it work offline customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object refresh: function() {}, // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads render: function() {} // This renders the widget only using information from data, it's called on page load. 
} }; script.js (initLog || (window.initLog = [])).push([new Date().getTime(), "These are also at the end of every file"]); // Plugins, extends and globals go here. i.e. Number.prototype.pad = .... var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs iChrome.Status.log("Starting page generation"); // From now on iChrome.Status.log is defined, it's used in place of the initLog iChrome.CSS(); // Dynamically generate CSS based on settings iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary iChrome.Status.log("Tabs rendered"); // These will be omitted further along in this excerpt, but they're used everywhere // Checks for justInstalled => show getting started are run here /* The main init runs the bare minimum required to display the page, this sets all non-visible or instantly need things (such as widget dragging) on a timeout */ iChrome.deferredTimeout = setTimeout(function() { iChrome.deferred(refresh); // Pass refresh along, see above }, 200); }; iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page iChrome.Search = function() {}; // Modules have a base init function and are camel-cased and capitalized iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized /* Extension storage is async and fetched at the beginning of plugins.js, it's then stored in a variable that iChrome.Storage processes. The fetcher checks to see if processStorage is defined, if it is it gets called, otherwise settings are left in iChromeConfig */ var processStorage = function() { iChrome.Storage(function() { iChrome.Templates(); // Templates are read from their elements and held in a cache iChrome(); // Init is called }); }; if (typeof iChromeConfig == "object") { processStorage(); } Objectives of the restructure Memory usage: Chrome apparently has a memory leak in extensions, they're trying to fix it but memory still keeps on getting increased every time the page is loaded. The app also uses a lot on its own. Code readability: At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything. Module interdependence: Right now modules call each other a lot, AFAIK that's not good at all since any change you make to one module could affect countless others. Fault tolerance: There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering the user should at least be able to remove it. Speed is currently not an issue and I'd like to keep it that way. How I think I should do it The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded = init). Modules should each go in their own file, I'm thinking there should be a set of core files that all modules can rely on and call directly and everything else should be event based. Widget structure should be kept largely the same, but maybe they should also be split into their own files. AFAIK you can't load all templates in a folder, therefore they need to stay inline. Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled. Question: Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? 
Should I not do any of the things I listed? Do applications written with Backbone tend to be more intensive (memory and speed) than ones written in Vanilla JS? Also, can I expect to improve this with a proper restructure or is my current code about as good as can be expected?

    Read the article

  • Atlassian Crucible very slow on large repository

    - by Mitch Lindgren
    Hi everyone, My company has been running a trial of Atlassian Crucible for some months now. For repositories where it's working properly, users have given very positive feedback about the tool. The problem I'm having is that we have several different projects, each with its own repository, and some of those repositories are very large. One repository in particular has a large number of branches and probably around 9,000 files per branch. Browsing that repository in Crucible is extremely slow. Crucible is running on a CentOS VM. The VM has 4GB of RAM, and I've set Crucible's maximum at 3GB, of which it is currently using 2GB. I've brought this up in a support ticket with Atlassian, and they suggested the following: In particular because you have a rather large SVN repository you will likely find that Fisheye will be creating a large index file on disk. To help improve performance a few things you can try are: Increasing the available memory available to Fisheye (see the document above). Migrating to an external database: confluence.atlassian.com/display/FISHEYE/Migrating+to+an+External+Database Excluding files and directories from your index that aren't needed: confluence.atlassian.com/display/FISHEYE/Allow+(Process) (Sorry for not hyperlinking; don't have the rep.) I've tried all of these things to an extent, but so far none have helped greatly. I was originally running Crucible on a Windows box with 2GB of RAM using the built in HSQL DB. Moving to MySQL on CentOS saw a performance increase for some repositories, and made Crucible much more stable, but did not seem to help much with our biggest repository. There are only so many files/branches I can exclude from indexing while maintaining the tool's usefulness. That being the case, does anyone have any tips on how to speed up Crucible on large repositories, without investing in insanely powerful hardware? Thanks! Edit: To clarify, since I didn't mention it explicitly above, I am using FishEye.

    Read the article

  • How to make ClearType look OK even with large fonts

    - by Sorin Sbarnea
    I discovered that ClearType has a negative impact on large/huge fonts. Just take a look at the http://patterns.dataincubator.org/book/ page (check the title with huge fonts versus the normal text). If you are on Windows 7 you can use Win+Plus/Minus to zoom in/out. I'm looking for a configuration that would make both small fonts and large fonts look good. The system is Windows 7, but I suppose you could replicate the problem on Vista and even XP if you activate ClearType. Current results: ClearType on and tuned – small fonts looking good and large ones looking bad; ClearType off – small fonts looking bad and large ones looking OK (smoothed).

    Read the article

  • Extract large zip file (50 GB) on Mac OS X

    - by chingjun
    I was trying to move the files to another hard drive, so I archived all my photos in one large ZIP file using the Mac OS X built-in compress function. But the file fails to extract. I've tried many programs, but none of them were able to extract the file. I've tried Mac OS X's extract utility, StuffIt Expander, and 7-Zip (command line); all failed. Mac's archive utility and StuffIt don't seem to support large files, and 7-Zip's command-line version gave an error stating the archive was unsupported. I have had no luck in Windows either, as many of my files have Chinese filenames and couldn't be extracted to the correct names under Windows. Are there any programs that support large files, can handle files compressed using Mac OS X's compress function, and support UTF-8 filenames? With or without a GUI is fine. Update Well, I made the wrong decision to compress the files, and it's already too late. I thought I would be able to extract the file if I could compress it. It's too late, the original copies are gone; only a large ZIP file is left here. I have tried using 'unzip', but it says "End-of-central-directory signature not found". I guess it doesn't have large file support either. I would try the Windows Vista method as stated by SuperMagic, but I need to borrow a computer for that. Anyway, thank you everyone, but please provide more suggestions on what software could possibly extract that file.
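
    If the archive itself is intact, one more avenue that might be worth a try (a hedged suggestion, not something from the thread) is Python's built-in zipfile module: it reads ZIP64 archives larger than 4 GB and keeps UTF-8 names when the archive flags them as such, which the OS X archiver normally does. The filename below is an assumption:

        # Hedged sketch: test and extract a large ZIP64 archive with Python 3.
        import zipfile

        archive = "photos.zip"
        if zipfile.is_zipfile(archive):
            with zipfile.ZipFile(archive) as zf:
                bad = zf.testzip()        # returns the first corrupt member, if any
                if bad is not None:
                    print("corrupt member:", bad)
                else:
                    zf.extractall("extracted")
        else:
            print("central directory not found; the archive is probably truncated")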

    Read the article

  • Copying large files from USB devices to the internal hard drive fails on Mac OS

    - by John M. P. Knox
    I have a second-generation 13" MacBook Air running Mac OS X 10.6.6 with a 2.13 GHz processor, 4 GB of RAM, and a 256 GB SSD hard disk. I often get failures when I attempt to copy a large file or large collection of files from an external USB drive (typically a "Firewire" generation Drobo) to the internal drive. The failure behaves almost exactly as if I had pulled the USB cable from the computer in mid-transfer. I get a warning that I have removed the hard disk improperly. After this event, the drive no longer appears mounted in the finder, and I have to unplug and reinsert the USB cable to mount the drive again. I have also seen a similar problem when using Aperture 3 to import a large number of photos and videos from a USB Compact Flash card reader. The import will fail and I will have to unplug the Card Reader and import the missing items. Oddly, reversing the direction of the copy seems pretty reliable. I've never had a problem copying a large file to a USB device, meaning that I have quite a few large files which are stranded on my Drobo. Model Identifier: MacBookAir3,2 Boot ROM Version: MBA31.0061.B01 I have seen a similar issue reported on Apple's website: http://discussions.apple.com/thread.jspa?threadID=2648590&tstart=0 The only suggested resolutions there seems to be switching to another form of connectivity (e.g. firewire, which does not exist on MacBook Air), or downgrading to Mac OS 10.6.4, or reverting the USB kernel extensions to the 10.6.4 versions: http://discussions.apple.com/message.jspa?messageID=12566073#12582956 I'm not too keen on the idea of downgrading kernel extensions. Does anyone know of a hardware revision without this issue that I can trade up to? Are there any other potential solutions out there?

    Read the article

  • Synchronize large objects to S3 efficiently

    - by emk
    I need to synchronize about 30GB of git repositories to S3. These repos may contain some very large pack files, on the rough order of 2GB. I know that S3 has recently added support for large objects, and has new APIs that allow the objects to be uploaded as several parallel chunks. Is there a good command-line tool for Linux that allows me to efficiently synchronize large objects with S3 in a fashion similar to s3sync?
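
    The tooling has moved on since this was asked, but as a hedged illustration of the parallel-chunk uploads the question refers to, boto3's managed transfer splits large objects into concurrent multipart uploads automatically; the bucket name, key and thresholds below are made-up examples:

        # Hedged illustration: boto3 performs parallel multipart uploads for
        # large objects when the file size crosses multipart_threshold.
        import boto3
        from boto3.s3.transfer import TransferConfig

        s3 = boto3.client("s3")
        config = TransferConfig(
            multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MB
            multipart_chunksize=64 * 1024 * 1024,   # 64 MB parts
            max_concurrency=8,                      # upload 8 parts in parallel
        )
        s3.upload_file("repo-pack-2GB.pack", "my-backup-bucket",
                       "repos/repo-pack-2GB.pack", Config=config)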

    Read the article

  • Front-end structure of large scale Django project

    - by Saike
    A few days ago I started work at a new company. Before me, all the front-end and backend code was written by one man (oh my...). As you know, a Django app contains two main directories for the front-end: /static – for static (public) files – and /templates – for Django templates. Now we have a large application with more than 10 different modules, like home, admin, spanel, mobile etc. This is the current structure of files and directories: FIRST – the /static directory. As you can see, it is a mix of directories, some named like modules, some containing global libs. SECOND – the /templates directory. Some directories are named like a module but hold mixed templates, some depend on the new version =), some are used only in one module but are placed globally. I think this is an ugly, unmaintainable, stress-inducing structure! After some time spent on it, I suggest the following scheme, which is based on a module structure. First, we have version directories, used to keep full project backups, including a /DEPRECATED directory – for old, unused files – and a /CURRENT (Active) directory that contains the production version of the project. I think this is right, because we can get to older or newer versions of files quickly and easily. Also, we are protected from broken or wrong dependencies between different versions. Second, in every version we have standalone modules and a global module. Every module contains its own /static and /templates directories. This structure is used to avoid broken or wrong dependencies between different modules, because every module has its own JS app, CSS stylesheets and local images. The global module contains all libraries, main stylesheets and images like logos or the favicon. I think this structure is much better to maintain, update, refactor etc. My question is: do you think this scheme is better than the current one? Can this scheme live, or is it not possible to implement it in a Django app?
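
    For what it's worth, a per-module layout is close to what Django's app conventions already support out of the box. A hedged sketch (the module names come from the question; the settings shown are the stock Django defaults rather than anything specific to this project):

        # Hypothetical settings.py excerpt: each app keeps its own namespaced
        # assets, e.g.
        #   home/static/home/app.js          home/templates/home/index.html
        #   mobile/static/mobile/app.js      mobile/templates/mobile/index.html
        # while project-wide libraries, logos and the favicon live at the top level.
        from pathlib import Path

        BASE_DIR = Path(__file__).resolve().parent.parent

        STATICFILES_DIRS = [BASE_DIR / "static"]    # global static assets

        TEMPLATES = [{
            "BACKEND": "django.template.backends.django.DjangoTemplates",
            "DIRS": [BASE_DIR / "templates"],       # global templates
            "APP_DIRS": True,                       # also look inside every app
        }]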

    Read the article

  • Site too large to officially use Google Analytics?

    - by Jeff Atwood
    We just got this email from the Google Analytics team: We love that you love our product and use it as much as you do. We have observed, however, that a website you are tracking with Google Analytics is sending over 1 million hits per day to Google Analytics servers. This is well above the "5 million pageviews per month per account" limit specified in the Google Analytics Terms of Service. Processing this amount of data multiple times a day takes up valuable resources that enable us to continue to develop the product for all Google Analytics users. As such, starting August 23rd, 2010, the metrics in your reports will be updated once a day, as opposed to multiple times during the course of the day. You will continue to receive all the reports and features in Google Analytics as usual. The only change will be that data for a given day will appear the following day. We trust you understand the reasons for this change. I totally respect this decision, and I think it's very generous not to kick us out. But how do we do this the right way – what's the official, blessed Google way to use Google Analytics if you're a "whale" website with lots of hits per day? Or are there other analytics services that would be more appropriate for very large websites?

    Read the article

  • Large invoice database structure and rendering

    - by user132624
    Our client has an MS SQL database that has 1 million customer invoice records in it. Using the database, our client wants its customers to be able to log into a front-end web site and then be able to view, modify and download their company’s invoices. Given the size of the database and the large number of customers who may log into the web site at any time, we are concerned about database engine performance and web page invoice rendering performance. The 1 million invoice database covers just 90 days’ sales, so we will remove invoices over 90 days old from the database. Most of the invoices have multiple line items. We can easily convert our invoices into various data formats, so for example it is easy for us to convert to and from SQL to XML with a related schema and XSLT. Any data conversion would be done on another server so as not to burden the web interface server. We have tentatively decided to run the web site on a .NET Framework IIS web server using MS SQL on MS Azure. How would you suggest we structure our database for best performance? For example, should we put all the invoices of all customers located within the same 5-digit or 6-digit zip codes into the same table? Or could we set up a separate home directory for each customer on IIS and place each customer’s invoices in that home directory in XML format? And secondly, what would you suggest is the best method to render customer invoices on a web page and allow customers to modify them, for best performance? The ADO.NET XML DataSet looks intriguing to us as a method, but we have never used it.

    Read the article

  • How should I evaluate the database solution for a large data application

    - by GµårÐïåñ
    Background I have been tasked with writing an application in VB.net that will be a combination of document and inventory management. It will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing and printing, with possible OCR so documents are searchable as well, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside) Help Please I need help understanding how to evaluate my database options. My concern is finding a database solution that will not become unstable due to size restrictions, record limitations or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit and, again, scalability. So I believe that leaves me with MS SQL, SQLite and MySQL (note, I am open to alternatives). And this is where I need help in understanding how to evaluate those databases. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code to avoid errors and memory leaks. How should I evaluate my database options for this scenario?

    Read the article

  • What can I do about rsync of large files killing my laptop's wifi connection

    - by David Dean
    When I run a rsync to backup my home folder over the network like so: rsync -avhz --progress --delete /home/dbdean/ [email protected]:/home/backups/david/ I seem to have problems with my quite large .VirtualBox/HardDisks/Windows XP.vdi file. Occasionally the wifi will silently fail (the transfer stops, and any other network access is broken). If I reconnect the wifi to my network before the transfer times out, it happily keeps going (and other network access is back), but I can't just leave it unattended most of the time, as I have to keep an eye on it. I'm guessing this is probably a bug in the wireless card related to a particularly high sustained volume of network usage, but I'm not really sure where to start with diagnosing this problem so that I can provide a good bug report. Or it could be something else, I guess. Any help would be appreciated. My network card is an Atheros Communications Inc. AR9285, as lspci -k shows: 43:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01) Subsystem: Hewlett-Packard Company Device 3040 Kernel driver in use: ath9k Kernel modules: ath9k

    Read the article

  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    - by Marco Russo (SQLBI)
    Since the first beta of Analysis Services 2012, I worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case-study using one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don’t see often in other case studies (which are usually more marketing-oriented). This white paper has the following structure: Requirements (data model, capacity planning, client tool) Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular) Data Model optimizations (memory compression, query performance, scalability) Partitioning and Processing strategy for near real-time latency Hardware selection (NUMA analysis, Azure VM tests) Scalability tests (estimation of maximum users per node) If you are in charge of evaluating Tabular as analytical engine, or if you have to design your solution based on Tabular, this white paper is a must read. But if you just want to increase your knowledge of Analysis Services, you will find a lot of useful technical information. That said, my favorite quote of the document is the following one, funny but true: […] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house. At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers.  But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity) – do this. Owen Graupman, inContact I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical “BI system”), because it is more frequent than you might expect (and has several advantages over many alternative approaches).

    Read the article
