Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 299/457

  • Which sector of the IT industry best suits my career needs?

    - by Shailesh Tainwala
    I am a student of software engineering and will be graduating in a year's time. I want to get a few years of work experience before considering further studies. I like the idea of working on projects developing end-to-end systems for medium/large enterprises in different domains. My areas of special interest are AI and data mining. ERP and MIS are the terms that most closely resemble what I am driving at. What type of companies should I ideally be looking at?

    Read the article

  • MS Access Mark Duplicates in order of appearance - using the function RankOfDup: (SELECT Count(*) ...)

    - by veska stoyanova
    I'm trying to create a ranking that shows the sequence of agreements for the two fields Customer and Agreement. The number for agreements must be unique, whereas customers can repeat. The formula RankOfDup: (SELECT Count(*) FROM Data a WHERE a.customer = Data.customer And a.agreement >= Data.agreement) works beautifully. After this query, with columns Agreement, Customer and RankOfDup, I need to create a crosstab that transposes the RankOfDup. It works when I materialize the table first and then build the crosstab on it, but my data is too large, so I'm trying to put the select query with the ranking directly into the crosstab query. However, when I try to do this, Access gives an error message that the Microsoft Jet ... doesn't recognise Data.customer. Any ideas how I can fix this?
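
    To show what RankOfDup computes (and the shape the crosstab should produce), here is a rough Python sketch of the same logic on made-up rows; note that with a.agreement >= Data.agreement, the largest agreement per customer gets rank 1:

        from collections import defaultdict

        rows = [("cust1", 101), ("cust1", 103), ("cust2", 102), ("cust1", 105)]  # made-up data

        by_customer = defaultdict(list)
        for customer, agreement in rows:
            by_customer[customer].append(agreement)

        # RankOfDup equivalent: count of this customer's agreements that are
        # >= the current one, so rank 1 goes to the largest agreement.
        pivot = {}
        for customer, agreements in by_customer.items():
            for rank, agreement in enumerate(sorted(agreements, reverse=True), start=1):
                pivot.setdefault(customer, {})[rank] = agreement

        print(pivot)  # {'cust1': {1: 105, 2: 103, 3: 101}, 'cust2': {1: 102}}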

    Read the article

  • Graph diffing and versioning tool

    - by hashable
    I am working with a team that edits large DAGs represented as single files. Currently we are unable to have multiple users concurrently modifying the DAG. Is there a tool (somewhat like the Eclipse SVN plugin) that can do revision control on the file (manage timestamps/revision stamps) to identify incoming/outgoing/conflicting changes (Node/Link insertion/deletion/modification) and merge changes just like programmers do with source code files? The system should also do dependency management. E.g. an incoming Link must not be accepted when one of its two Nodes is absent. That is, it should not "break" the existing DAG by allowing partial updates. Is there a framework to do this using generic "Node" and "Link" interfaces? Note: I am aware of Protege and its plugins. They currently do not satisfy my requirements.
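
    To make the dependency-management requirement concrete, here is a rough Python sketch of the validation a merge would have to run before accepting a patch; the (src, dst) link representation and all names are mine, purely illustrative:

        # Reject a patch of link insertions if an endpoint is missing or a
        # cycle would appear. Nodes are hashable IDs; the existing graph is
        # assumed consistent.
        def patch_breaks_dag(nodes, links, new_links):
            node_set = set(nodes)
            # Rule 1: an incoming link must not reference an absent node.
            for src, dst in new_links:
                if src not in node_set or dst not in node_set:
                    return True
            # Rule 2: the merged graph must stay acyclic (iterative DFS).
            adjacency = {}
            for src, dst in list(links) + list(new_links):
                adjacency.setdefault(src, []).append(dst)
            WHITE, GRAY, BLACK = 0, 1, 2
            color = {n: WHITE for n in node_set}
            for start in node_set:
                if color[start] != WHITE:
                    continue
                stack = [(start, iter(adjacency.get(start, [])))]
                color[start] = GRAY
                while stack:
                    node, children = stack[-1]
                    for child in children:
                        if color[child] == GRAY:
                            return True  # back edge: the patch creates a cycle
                        if color[child] == WHITE:
                            color[child] = GRAY
                            stack.append((child, iter(adjacency.get(child, []))))
                            break
                    else:
                        color[node] = BLACK
                        stack.pop()
            return False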

    Read the article

  • Parsing a CSV File to a Rails Database

    - by Schroedinger
    G'day guys, I'm using FasterCSV and a rake script to parse a CSV with about 30 columns into my Rails DB as 'Trade' records. The script works fine when all of the values are treated as strings, but when I change a column to a decimal, int or other type, everything goes to hell. I'm wondering whether FasterCSV has built-in int etc. parsing, or whether I'll have to manage these conversions within my model. Basically, I'm given a giant amount of trade data, need to import it, and then need to provide feedback such as the average trade volume, the times, etc. I understand I can do all of that with the wonderful records provided to me by ActiveRecord, but I wondered if there was an easier way to populate a rather large database from a given CSV. Several of the fields don't have values for certain rows; FasterCSV seems to work perfectly when they're all strings, but not when I try to use decimal or other types.
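
    My guess is it's the blank cells that break the numeric conversion: a nullable numeric column needs the blanks mapped to nil/None before the cast. This Python sketch shows the blank-tolerant conversion I mean (the file name and column handling are illustrative):

        import csv

        def to_number(value):
            # Blank cells become None (SQL NULL) instead of crashing the cast.
            if value is None or value.strip() == "":
                return None
            try:
                return int(value)
            except ValueError:
                try:
                    return float(value)
                except ValueError:
                    return value  # genuinely non-numeric cells stay strings

        with open("trades.csv", newline="") as handle:  # file name is illustrative
            for row in csv.DictReader(handle):
                trade = {column: to_number(cell) for column, cell in row.items()}
                # hand `trade` to the model / bulk insert here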

    Read the article

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (on file-creation overhead) and simple commands like ls require extra care due to internal limits of bash: -bash: /bin/ls: Argument list too long. Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program which simply aggregates the output data into a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from an embarrassingly parallel computation before it gets unmanageable?
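
    The closest thing to a solution I have is to keep the single output file but serialize the appends with an advisory lock, roughly like this Python sketch (it assumes the cluster's shared filesystem honours POSIX locks, which NFS mounts don't always do):

        import fcntl

        def append_result(path, lines):
            # An exclusive advisory lock serializes concurrent appenders,
            # so two jobs finishing together cannot interleave their lines.
            with open(path, "a") as out:
                fcntl.flock(out, fcntl.LOCK_EX)  # blocks until the lock is ours
                out.write("".join(lines))
                out.flush()
                fcntl.flock(out, fcntl.LOCK_UN)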

    Read the article

  • Export images from Flash

    - by fabieno
    First of all, let me clarify that I am a Flash noob; this is a freelance job I am doing for someone. I have a Flash file with symbols I need to export as PNG images, but for some reason the exported images have a different width and height than indicated in the Flash file. After checking, I found out that the new dimensions don't even keep the original ratio between height and width. This happens for several symbols at different sizes. What might be the reason for this? I have also considered finding a way to take a snapshot, from within Flash, of a slice of the Flash movie; is that possible? Understand that I cannot take the snapshots manually, as I need this done on a very large quantity of symbols. Thank you

    Read the article

  • Calculating optimal site-to-site routing using pre-computed times between sites

    - by Idistic
    Assume that I have a number of sites (locations) and the time it takes to travel from each site to each site is pre-computed.

    Example data - pre-calculated site-to-site times in minutes:

    From Start Site: to A 20, to B 15, to C 15
    From Site A: to B 10, to C 15
    From Site B: to A 10, to C 20
    From Site C: to A 15, to B 20

    4 sites is fairly simple, but what if the site set was, say, 1000 sites? Given a large site set, what would the best approach be to quickly find the optimal routes from the start site while visiting every other site just once?

    Route solutions from the start site for 3 sites:

    1. A(20) B(10) C(20) = 50
    2. A(20) C(15) B(20) = 55
    3. B(15) A(10) C(15) = 40
    4. B(15) C(20) A(15) = 50
    5. C(15) A(15) B(10) = 40
    6. C(15) B(20) A(10) = 45
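
    For the 3-site case, full enumeration is easy; here is a rough Python sketch using the numbers above ("S" stands for the start site, and all names are just the example data):

        from itertools import permutations

        # Pre-computed minutes from the example above; "S" is the start site.
        COST = {
            ("S", "A"): 20, ("S", "B"): 15, ("S", "C"): 15,
            ("A", "B"): 10, ("A", "C"): 15,
            ("B", "A"): 10, ("B", "C"): 20,
            ("C", "A"): 15, ("C", "B"): 20,
        }

        def all_routes(start, sites):
            # Exact enumeration: every visiting order exactly once, O(n!).
            scored = []
            for order in permutations(sites):
                total, here = 0, start
                for site in order:
                    total += COST[(here, site)]
                    here = site
                scored.append((total, order))
            return sorted(scored)

        for total, order in all_routes("S", ["A", "B", "C"]):
            print(order, total)  # ('B', 'A', 'C') 40 ... ('A', 'C', 'B') 55

    At 1000 sites this becomes the travelling-salesman path problem: n! enumeration and even the exact Held-Karp dynamic program, O(n^2 * 2^n), are hopeless at that scale, so I assume the practical answer involves heuristics such as nearest-neighbour construction plus 2-opt improvement, or a dedicated TSP solver.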

    Read the article

  • What messaging technologies on Windows CE for guaranteed message delivery?

    - by Aidanapword
    We are building a Windows CE (6.0 R3) based device that requires guaranteed and audit-ready message delivery (including store & forward) up to and down from the cloud. I have been looking for choices beyond:

    - MSMQ
    - a proprietary solution (what our prototype device is using)
    - AMQP (I have not found any RabbitMQ clients for CE, for example)

    ... are there any others? We will be transporting sensitive data (who isn't?!?!) over a public network, and large-scale options are required. Anything running on an embedded device will be performance sensitive too.

    Read the article

  • Programmatically prevent ASP.NET AJAX scripts from rendering

    - by cbp
    Does anyone know of a way that I can stop all the .NET AJAX scripts from rendering, even if a ScriptManager exists on the page? The ScriptManager's Visible property has been overridden and disabled, so you receive a NotImplementedException if you try to set it. The reason I would like to do this is that I don't want these large chunks of JavaScript all over my pages when they are not required. The ScriptManager needs to be included on the master page to ensure that only one ScriptManager is added, but it would be stupid to have to maintain two versions of the same master page, one AJAX-enabled and one not. Edit: I am actually using Telerik's RadScriptManager with RadAjax, in case anyone knows a method using these classes instead.

    Read the article

  • Can I use Duff's Device on an array in C?

    - by Ben Fossen
    I have a loop here and I want to make it run faster. I am passing in a large array. I recently heard of Duff's Device; can it be applied to this for loop? Any ideas?

        for (i = 0; i < dim; i++) {
            for (j = 0; j < dim; j++) {
                dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)];
            }
        }

    Read the article

  • Running JUnit tests in parallel?

    - by krosenvold
    I'm using JUnit 4.4 and Maven, and I have a large number of long-running integration tests. When it comes to parallelizing test suites, there are a few solutions that allow me to run each test method in a single test class in parallel. But all of these require that I change the tests in one way or another. I really think it would be a much cleaner solution to run X different test classes in X threads in parallel. I have hundreds of tests, so I don't really care about threading individual test classes. Is there any way to do this?

    Read the article

  • WordPress Taxonomy

    - by ninusik
    I am creating a WordPress blog (no live link yet because it's still at the planning stage). I want to set up the following tag structure:

    Category 1: Services. Tags: web design, logo design, print design, etc.
    Category 2: Type of clients. Tags: small businesses, large companies.

    So each post will be tagged with one or more tags from Category 1, and one tag from Category 2. However, I have heard that using more than one category per post is a bad idea in terms of SEO. But then, how can I go about it? I don't want any SEO penalties, but I need to somehow categorize each post along these 2 distinct axes. Should I create some custom taxonomies? That seems like overkill to me. The solution is probably something rather simple, but it just escapes me. I'm not very experienced with taxonomies, so I'll appreciate any suggestions.

    Read the article

  • Is there any way to "peek" at a file while it's uploading through HTTP onto a Windows box?

    - by iisystems
    I need to add a file upload function to an ASP.NET website and would like to be able to read a small portion of the file on the server while it's still uploading: a peek- or preview-type function, so I can determine the contents and give some feedback to the user while the upload is in progress (we're talking about large files here). Is there any way to do this? Worst case, I'm thinking of writing a custom control which uploads only a fixed number of bytes of the file once chosen and then, under the covers, starts another upload of the full file. Not totally sure even this is possible, but I'm looking for a more elegant solution anyway... Thanks!

    Read the article

  • How to index a string like "aaa.bbb.ddd-fff" in Lucene?

    - by user46703
    Hi, I have to index a lot of documents that contain reference numbers like "aaa.bbb.ddd-fff". The structure can change, but it's always some arbitrary numbers or characters combined with "/", "-", "_" or some other delimiter. The users want to be able to search for any of the substrings, like "aaa" or "ddd", and also for combinations like "aaa.bbb" or "ddd-fff". The best I have been able to come up with is to create my own token filter, modeled after the synonym filter in "Lucene in Action", which spits out multiple terms for each input. In my case I return "aaa.bbb", "bbb.ddd", "bbb.ddd-fff" and all other combinations of the substrings. This works pretty well, but when I index large documents (100 MB) that contain lots of such strings, I tend to get out-of-memory exceptions because my filter returns multiple terms for each input string. Is there a better way to index these strings?
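
    What my filter effectively produces is every contiguous run of parts, which is O(n^2) tokens per reference number; I suspect that quadratic blow-up is what eats the memory on 100 MB documents. A rough Python sketch of the same expansion (the real thing is a Lucene TokenFilter; this is only to show the combinatorics):

        import re

        def span_tokens(reference, delimiters=r"[./_-]"):
            # Emit every contiguous run of parts, preserving the original
            # delimiters inside each run.
            pieces = re.split(f"({delimiters})", reference)
            parts = pieces[0::2]  # the substrings
            seps = pieces[1::2]   # the delimiters between them
            for i in range(len(parts)):
                token = parts[i]
                yield token
                for j in range(i + 1, len(parts)):
                    token += seps[j - 1] + parts[j]
                    yield token

        print(list(span_tokens("aaa.bbb.ddd-fff")))
        # ['aaa', 'aaa.bbb', 'aaa.bbb.ddd', 'aaa.bbb.ddd-fff',
        #  'bbb', 'bbb.ddd', 'bbb.ddd-fff', 'ddd', 'ddd-fff', 'fff']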

    Read the article

  • Most efficient approach for multilingual PHP website

    - by alexteg
    I am working on a large multilingual website and I am considering different approaches for making it multilingual. The possible alternatives I can think of are:

    1. The Gettext functions, with generation of .po files
    2. One MySQL table with the translations and a unique string ID for each text
    3. PHP files with arrays containing the different translations, with unique string IDs

    As far as I have understood, the Gettext functions should be the most efficient, but my requirement is that it should be possible to change a text string in the original reference language (English) without the other translations of that string automatically reverting back to English just because a couple of words changed. Is this possible with Gettext? What is the least resource-demanding solution? Is using the Gettext functions or PHP files with arrays more or less equally resource demanding? Any other suggestions for more efficient solutions?
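
    The catch with Gettext here, as far as I know, is that the msgid is the English source string itself, so editing the English wording changes the key and orphans the existing translations until they are re-matched. Keying on a stable ID, as in alternatives 2 and 3, sidesteps that; a tiny Python sketch of the idea (catalogue contents made up):

        # ID-keyed catalogue: editing the English wording does not orphan
        # the other languages, because the key never changes.
        CATALOGUE = {
            "checkout.title": {"en": "Check out", "de": "Zur Kasse", "sv": "Till kassan"},
        }

        def translate(string_id, lang, fallback="en"):
            entry = CATALOGUE.get(string_id, {})
            return entry.get(lang) or entry.get(fallback) or string_id

        print(translate("checkout.title", "de"))  # Zur Kasse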

    Read the article

  • Database for Importing NUnit results?

    - by McWafflestix
    I have a large set of NUnit tests; I need to import the results from a given run into a database, then characterize the set of results and present them to the users (email for test failures, a web presentation for examining results). I also need to track multiple runs over time (for reporting failure rates and so on). The XML will be the XML generated by nunit-console. I would like to import the XML with a minimum of fuss into some database that can then be used to persist and present results. We will have a number of custom categories that we will need to be able to sort across, as well. Does anyone know of a database schema that can handle importing this type of data and that can be customized to our individual needs? This type of problem seems like it should be common, and so a common solution should exist for it, but I can't seem to find one. If anyone has implemented such a solution before, advice would be appreciated as well.
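
    In case it helps frame answers: the minimal shape I have in mind is a run table plus a test_case table keyed to it, with category columns added per our needs. A rough Python/SQLite sketch of the import (the table names are mine, and it assumes the NUnit 2.x attribute names name/success/time on <test-case>):

        import sqlite3
        import xml.etree.ElementTree as ET

        # Assumed minimal schema; extend with run metadata / custom categories.
        SCHEMA = """
        CREATE TABLE IF NOT EXISTS test_run  (id INTEGER PRIMARY KEY,
                                              imported_at TEXT DEFAULT CURRENT_TIMESTAMP);
        CREATE TABLE IF NOT EXISTS test_case (id INTEGER PRIMARY KEY,
                                              run_id INTEGER REFERENCES test_run(id),
                                              name TEXT, success INTEGER, seconds REAL);
        """

        def import_results(xml_path, db_path="results.db"):
            # Load one nunit-console XML file as a new run.
            db = sqlite3.connect(db_path)
            db.executescript(SCHEMA)
            run_id = db.execute("INSERT INTO test_run DEFAULT VALUES").lastrowid
            for case in ET.parse(xml_path).getroot().iter("test-case"):
                db.execute(
                    "INSERT INTO test_case (run_id, name, success, seconds) "
                    "VALUES (?, ?, ?, ?)",
                    (run_id,
                     case.get("name"),
                     1 if case.get("success") == "True" else 0,
                     float(case.get("time") or 0.0)),
                )
            db.commit()
            db.close()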

    Read the article

  • What is a performant way to 'tree-walk' through my Entity Framework data?

    - by Greg
    Hi, I have an Entity Framework design with a few tables that define a "graph", so there can be a large chain of relationships between objects in the few tables via parent/child relationships. What is a performant way to 'tree-walk' through my Entity Framework data? That is, I assume I wouldn't want to load the full set of all NODES and RELATIONSHIPS from the database just for the purpose of walking the tree, where the end result may only be identifying leaf nodes. Or would this be OK with the way lazy loading works at the column/parameter level? Otherwise, how could I load just the skeleton of the objects and then, when needing to refer to any attributes, have them lazy load?

    Read the article

  • C++ Memory Allocation & Linked List Implementation

    - by pws5068
    I'm writing software to simulate the "first-fit" memory allocation scheme. Basically, I allocate a large X-megabyte chunk of memory and subdivide it into blocks when chunks are requested according to the scheme. I'm using a linked list called "node" as a header for each block of memory (so that we can find the next block without tediously looping through every address value).

        head_ptr = (char*) malloc(total_size + sizeof(node));
        if (head_ptr == NULL)
            return -1; // malloc error :-(

        node* head_node = new node; // build block header
        head_node->next = NULL;     // header points to next block (which doesn't exist yet)
        head_node->previous = NULL;

        memset(head_ptr, head_node, sizeof(node));

    But this last line returns: error: invalid conversion from 'node*' to 'int'. I understand why this is invalid, but how can I place my node into the pointer location of my newly allocated memory?

    Read the article

  • Is there a built-in way to determine the size of a WCF response?

    - by jaminto
    Before a client gets the full payload of the web request, we'd like to first send it a measurement of the size of the response it will get. If the response will be too large, the client will present a message to the user giving them the option to abort the operation. We can write some custom code to preload the response on the server, determine the size, and then pass it on to the client, but we'd rather not if there's another way to do it. Does anyone know if WCF has any tricky way to do this? Or are there any free third party tools out there that will accomplish this? Thanks.

    Read the article

  • Using Apache Velocity with StringBuilders/CharSequences

    - by mindas
    We are using Apache Velocity for dynamic templates. At the moment Velocity has the following methods for evaluation/replacing:

        public static boolean evaluate(Context context, Writer writer, String logTag, Reader reader)
        public static boolean evaluate(Context context, Writer out, String logTag, String instring)

    We use these methods by providing a StringWriter to write the evaluation results to. Our incoming data arrives as a StringBuilder, so we use StringBuilder.toString() and feed it as instring. The problem is that our templates are fairly large (megabytes, tens of megabytes in rare cases), replacements occur very frequently, and each replacement operation triples the amount of required memory (incoming data + StringBuilder.toString(), which creates a new copy, + outgoing data). I was wondering if there is a way to improve this. E.g. if I could find a way to provide a Reader and a Writer on top of the same StringBuilder instance that only uses extra memory for the in/out differences, would that be a good approach? Has anybody done anything similar and could share any source for such a class? Or is there a better solution to the given problem?

    Read the article

  • Hashing a Python method to regenerate output when the method is modified

    - by Seth Johnson
    I have a Python method that has a deterministic result. It takes a long time to run and generates a large output:

        def time_consuming_method():
            # lots_of_computing_time to come up with the_result
            return the_result

    I modify time_consuming_method from time to time, but I would like to avoid having it run again while it's unchanged. [time_consuming_method only depends on functions that are immutable for the purposes considered here; i.e. it might use functions from Python libraries but not from other pieces of my code that I'd change.] The solution that suggests itself to me is to cache the output and also cache some "hash" of the function. If the hash changes, the function will have been modified, and we have to re-generate the output. Is this possible or a ridiculous idea? If this isn't a terrible idea, is the best implementation to write

        f = """
        def ridiculous_method():
            a = ...  # lots_of_computing_time
            return a
        """

    , use the hashlib module to compute a hash for f, and use compile or eval to run it as code?
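
    For what it's worth, here is a rough Python sketch of what I'm imagining: hash the function's own source text (inspect.getsource sees only this one function, which matches my immutable-dependencies assumption above) and keep one pickled result per hash, so no compile/eval gymnastics are needed. All file names are made up:

        import hashlib
        import inspect
        import pickle

        def cached_by_source(func):
            # A new source hash means a new cache file, so editing the
            # function transparently invalidates the old result.
            source = inspect.getsource(func)
            digest = hashlib.sha256(source.encode()).hexdigest()
            cache_file = f"{func.__name__}.{digest}.pickle"

            def wrapper():
                try:
                    with open(cache_file, "rb") as handle:
                        return pickle.load(handle)
                except FileNotFoundError:
                    result = func()
                    with open(cache_file, "wb") as handle:
                        pickle.dump(result, handle)
                    return result

            return wrapper

        @cached_by_source
        def time_consuming_method():
            ...  # lots of computing time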

    Read the article

  • I'm working on a website that sells artwork; what's the best way to handle different image sizes?

    - by ThinkingInBits
    I'm working on a website that will allow users to upload and sell their artwork in different sizes. I was wondering what the best way would be to handle the different file sizes automatically. A few points I was curious about:

    1. How to define different size categories (small, medium, large) in such a way that I'll be able to dynamically resize images with proportional dimensions (see the sketch below).
    2. Should I store actual JPEGs of the different sizes for download? Or would it be easier to generate these different sizes for download on the fly?
    3. My thumbnails will be somewhat larger than your average thumbnails; should I store a second 'thumbnail image' with the site's watermark overlaying it? Or, once again, generate this on the fly?

    All opinions and advice are greatly appreciated!
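
    For point 1, a minimal sketch of the proportional resize, assuming the Pillow imaging library; the size map and file-naming scheme are made up:

        from PIL import Image  # assumes the Pillow library

        SIZES = {"small": 320, "medium": 800, "large": 1600}  # max edge in px, illustrative

        def make_variant(src_path, label):
            # thumbnail() shrinks proportionally to fit the bounding box
            # and never upscales, so the aspect ratio is always preserved.
            image = Image.open(src_path)
            edge = SIZES[label]
            image.thumbnail((edge, edge))
            out_path = f"{src_path.rsplit('.', 1)[0]}_{label}.jpg"
            image.convert("RGB").save(out_path, "JPEG", quality=90)
            return out_path

    Points 2 and 3 then seem mostly a caching trade-off: pre-generating the variants at upload time costs storage, while resizing on request costs CPU unless a cache sits in front.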

    Read the article

  • Should I use Drupal or a Kohana-type framework for a web "application"?

    - by Andres
    The debate is that I need a PHP framework (or Drupal) with the flexibility to add custom features to a potentially large application (web, and with an API). However, with a framework like Kohana, I see myself tackling and re-inventing the wheel for the simple stuff like account management and CMS features. Account management and quick data collection, like fast form creation, are tedious in Kohana but appear incredibly simple in Drupal. On the other hand, based on my limited Drupal experience, I doubt that building rapid custom "features" and allowing users to create "groups" and to manage their own roles within those groups is something Drupal can easily accomplish. To simplify: is Drupal capable of true web applications, where the application is a service and provides custom results to each user? Can it provide a dashboard-like interface for users to change their settings or preferences? Can it aggregate data from particular users to provide better results/info to others? If so, please point me to some knowledge :-)

    Read the article

  • Switch from SQL Server to MySQL/PostgreSQL for a startup?

    - by chopps
    I just checked out the licensing for SQL Server and, well... I can't afford it, since I'm funding this project myself. I have been tinkering with MySQL and PostgreSQL a bit over the past few weeks, and at this point I can't really decide which to go with. MySQL has a large user base and lots of people using it, so answers on how to do various things are not hard to find. I will be using ASP.NET with this project. Does anyone have experience going from SQL Server to either of these databases? Is one stronger than the other? Thoughts?

    Read the article

  • Most efficient algorithm for mesh-level optimal occlusion culling?

    - by Fredriku73
    I am new to culling. At first glance, it seems that most occlusion culling algorithms are object-level, not examining single meshes, which would be practical for game rendering. What I am looking for is an algorithm that culls all meshes within a single object that are occluded for a given viewpoint, with high accuracy. It needs to be at most O(n log n); a naive mesh-by-mesh comparison (O(n^2)) is too slow. I notice that the Blender GUI identifies the occluded meshes for you in real time, even if you work with large objects of 10,000+ meshes. What algorithm is used there, pray tell?

    Read the article
