Search Results

Search found 18409 results on 737 pages for 'large projects'.


  • .NET projects build automation with NAnt/MSBuild + SVN

    - by petr k.
    Hi everyone, for quite a while now I've been trying to figure out how to set up an automated build process at our shop. I've read many posts and guides on this matter and none of them really fits my specific needs. My SVN repository is laid out as follows:

        \projects
          \projectA (a product)
            \tags
              \1.0.0.1
              \1.0.0.2
              ...
            \trunk
              \src
                \proj1 (a VS C# project)
                \proj2
              \documentation

    Then I have a network share with a folder for each project (product), which in turn contains the binaries, written documentation and the generated API documentation (via NDoc - each project may have an .ndoc file in the repository) for every historical version (from the tags SVN folder) and for the latest version as well (from the trunk).

    Basically, what I want to do in a scheduled batch build are these steps:

        1. Examine the project's SVN folder and identify tags not present on the network share.
        2. For each of these tags: check out the tag folder, build it (with the Release config), copy the resulting binaries to the network share, search for .ndoc files, generate CHM files via NDoc, and copy the resulting CHM files to the network share.
        3. Do the same as in 2, but for the HEAD revision of trunk.

    Now, the trouble is, I have no idea where to start. I do not keep .sln files in the repository, but I am able to replace these with MSBuild files which in turn build the C# projects belonging to the specific product. I guess the most troubling part is the examination of the repository for tags which have not been processed yet - i.e. searching the tags and comparing them to a project's directory structure on the network share. I have no idea how to do that in either of the build tools (NAnt, MSBuild).

    Could you please provide me with some pointers on how to approach this task as a whole and in detail as well? I do not care whether I use NAnt, MSBuild, or both. I am aware that this might be rather complex, but every idea and NAnt/MSBuild snippet will be a great help. Thanks in advance.
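    One way to tackle the tag-discovery step is a small console helper rather than pure NAnt/MSBuild logic. A minimal C# sketch, assuming the svn command-line client is on the PATH; the repository URL and share path below are placeholders:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.IO;
        using System.Linq;

        class TagScanner
        {
            // Lists the tag names under a tags URL by shelling out to "svn list".
            static List<string> ListSvnTags(string tagsUrl)
            {
                var psi = new ProcessStartInfo("svn", "list " + tagsUrl)
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (var p = Process.Start(psi))
                {
                    string output = p.StandardOutput.ReadToEnd();
                    p.WaitForExit();
                    // "svn list" prints one entry per line; directories end with '/'
                    return output
                        .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
                        .Select(line => line.TrimEnd('/'))
                        .ToList();
                }
            }

            static void Main()
            {
                // Placeholder locations - adjust to the real repository and share.
                string tagsUrl = "http://svnserver/projects/projectA/tags";
                string shareDir = @"\\fileserver\builds\projectA";

                var published = new HashSet<string>(
                    Directory.GetDirectories(shareDir).Select(Path.GetFileName),
                    StringComparer.OrdinalIgnoreCase);

                foreach (string tag in ListSvnTags(tagsUrl))
                {
                    if (!published.Contains(tag))
                        Console.WriteLine("Tag not yet built/published: " + tag);
                    // for each such tag: check out, run MSBuild, copy output, run NDoc
                }
            }
        }

    Such a helper could then be invoked from an NAnt <exec> task or an MSBuild Exec task, with the tags it reports driving the per-tag checkout/build/copy steps.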

    Read the article

  • Text mining on large database (data mining)

    - by yox
    Hello, I have a large database of resumes (CVs), and a table skills grouping all users' skills. Inside that table there's a field skill_text that describes the skill in full text. I'm looking for an algorithm/software/method to extract significant terms/phrases from that table in order to build a new table with standardized skills.

    Here are some example skills extracted from the DB:

        Sectoral and competitive analysis
        Business Development (incl. in international settings)
        Specific structure and road design software - Microstation, Macao, AutoCAD (basic knowledge)
        Creative work (Photoshop, In-Design, Illustrator)
        checking and reporting back on campaign progress
        organising and attending events and exhibitions
        Development : Aptana Studio, PHP, HTML, CSS, JavaScript, SQL, AJAX
        Discipline: One to one marketing, E-marketing (SEO & SEA, display, emailing, affiliate program), Mix marketing, Viral Marketing, Social network marketing.

    The output should be something like:

        Sectoral and competitive analysis
        Business Development
        Specific structure and road design software - Macao
        AutoCAD
        Photoshop
        In-Design
        Illustrator
        organising events
        Development
        Aptana Studio
        PHP
        HTML
        CSS
        JavaScript
        SQL
        AJAX
        Mix marketing
        Viral Marketing
        Social network marketing
        emailing
        SEO
        One to one marketing

    As you see, only the skills remain - no other surrounding text. I know this is possible using text mining techniques, but how do I do it? The database is really large, which is a good thing because we can calculate term frequencies and decide whether something is a real skill or just meaningless text. The big problem is: how do I determine that "blablabla" is a skill? Thanks.
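    One low-tech starting point, offered as a sketch rather than a full text-mining pipeline: tally how often each token and each adjacent token pair occurs across skill_text, then review the most frequent entries by hand to seed the standardized skill table. The sample strings below stand in for the real table:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text.RegularExpressions;

        class SkillTermCounter
        {
            // Counts how often each token and each adjacent token pair (bigram)
            // appears across all skill_text values; frequent entries are candidates
            // for the standardized skill table.
            static Dictionary<string, int> CountTerms(IEnumerable<string> skillTexts)
            {
                var counts = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
                foreach (string text in skillTexts)
                {
                    string[] tokens = Regex.Split(text.ToLowerInvariant(), @"[^a-z0-9+#\-]+")
                                           .Where(t => t.Length > 1).ToArray();
                    for (int i = 0; i < tokens.Length; i++)
                    {
                        Add(counts, tokens[i]);
                        if (i + 1 < tokens.Length)
                            Add(counts, tokens[i] + " " + tokens[i + 1]);
                    }
                }
                return counts;
            }

            static void Add(Dictionary<string, int> counts, string key)
            {
                int n;
                counts.TryGetValue(key, out n);
                counts[key] = n + 1;
            }

            static void Main()
            {
                var sample = new[] { "Creative work (Photoshop, In-Design, Illustrator)",
                                     "Development : Aptana Studio, PHP, HTML, CSS, JavaScript" };
                foreach (var kv in CountTerms(sample).OrderByDescending(kv => kv.Value).Take(10))
                    Console.WriteLine(kv.Key + "\t" + kv.Value);
            }
        }

    Distinguishing a real skill from filler text would still need a manual review pass or a curated whitelist on top of the frequency counts.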

    Read the article

  • Visual Studio 2008 project organization for executable and assembly

    - by user304582
    Hi - I am having a problem setting up the following in Visual Studio 2008: a parent project which includes the entry-point Main() method class and which declares an interface, and a child project which has classes that implement the interface declared in the parent project.

    I have specified that Parent's Output type is a Console application, and Child's Output type is a Class library. In Child I have added a reference to Parent as a project, and specified that Child depends on Parent and that the build order should be Parent, then Child. The build succeeds, and as far as I can tell, the right things show up in the Child/bin/debug directory: Parent.exe and Child.dll. However, if I run Parent.exe, then at the point when it should load a class from Child.dll, it fails with the error message:

        exception executing operation System.TypeLoadException: Could not load type 'Child.some.class' from assembly 'Parent, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.

    I guess I'm confused as to how to get the Parent and Child projects to play together. I plan on having more child projects that use the same framework that is set up in the Parent, so I do not want to move the entry-point class down into the Child project. If I try to specify that the Child project is also a Console application, then the build fails because there is no Main() entry-point class in the child (even though the Parent project is included as a reference). Any help would be welcome! Thanks, Martin
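    The error text ("Could not load type ... from assembly 'Parent'") typically means the type was looked up without an assembly qualifier, so the runtime only searched Parent.exe and mscorlib. A hedged sketch of loading the implementation explicitly; the names Child.SomeClass and the assembly name Child are placeholders matching the question's layout:

        using System;
        using System.IO;
        using System.Reflection;

        // In Parent.exe: load the child class library explicitly and instantiate
        // the type that implements the interface declared in Parent.
        class PluginLoader
        {
            static object LoadImplementation(string typeName)
            {
                // Option 1: assembly-qualified name, e.g. "Child.SomeClass, Child"
                Type t = Type.GetType(typeName + ", Child");

                // Option 2: load the DLL sitting next to Parent.exe and ask it for the type.
                if (t == null)
                {
                    string dll = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Child.dll");
                    t = Assembly.LoadFrom(dll).GetType(typeName, true);
                }
                return Activator.CreateInstance(t);
            }
        }

    Because Parent has no reference to Child, Child.dll also has to sit next to the Parent.exe that is actually being run (for example via a post-build copy step, or by running the copy of Parent.exe found in Child\bin\Debug).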

    Read the article

  • nHibernate Domain Model and Mapping Files in Separate Projects

    - by Blake Blackwell
    Is there a way to separate the domain objects and mapping files into two separate projects? I would like to create one project called MyCompany.MyProduct.Core that contains my domain model, and another project called MyCompany.MyProduct.Data.Oracle that contains my Oracle data mappings. However, when I try to unit test this I get the following error message:

        Named query 'GetClients' not found.

    Here is my mapping file:

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="MyCompany.MyProduct.Core"
                           namespace="MyCompany.MyProduct.Core">
          <class name="MyCompany.MyProduct.Core.Client" table="MY_CLIENT" lazy="false">
            <id name="ClientId" column="ClientId"></id>
            <property name="ClientName" column="ClientName" />
            <loader query-ref="GetClients"/>
          </class>
          <sql-query name="GetClients" callable="true">
            <return class="Client" />
            call procedure MyPackage.GetClients(:int_SummitGroupId)
          </sql-query>
        </hibernate-mapping>

    Here is my unit test:

        try
        {
            var cfg = new Configuration();
            cfg.Configure();
            cfg.AddAssembly( typeof( Client ).Assembly );
            ISessionFactory sessionFactory = cfg.BuildSessionFactory();
            IStatelessSession session = sessionFactory.OpenStatelessSession();
            IQuery query = session.GetNamedQuery( "GetClients" );
            query.SetParameter( "int_SummitGroupId", 3173 );
            IList<Client> clients = query.List<Client>();
            Assert.AreNotEqual( 0, clients.Count );
        }
        catch( Exception ex )
        {
            throw ex;
        }

    I think I may be referencing the assembly improperly, because if I put the domain model object in the MyCompany.MyProduct.Data.Oracle project it works. Only when I separate them into two projects do I run into this problem.
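    A likely cause, hedged: cfg.AddAssembly(typeof(Client).Assembly) only scans the Core assembly, so if the .hbm.xml file is an embedded resource in MyCompany.MyProduct.Data.Oracle, its class mapping and named query are never registered. A sketch of adding both assemblies:

        using System.Reflection;
        using NHibernate;
        using NHibernate.Cfg;

        var cfg = new Configuration();
        cfg.Configure();

        // Domain classes are compiled into Core...
        cfg.AddAssembly(typeof(Client).Assembly);

        // ...but if the *.hbm.xml files are embedded resources in the Data.Oracle
        // project, NHibernate has to be told about that assembly as well, otherwise
        // named queries defined there are never registered.
        cfg.AddAssembly(Assembly.Load("MyCompany.MyProduct.Data.Oracle"));

        ISessionFactory sessionFactory = cfg.BuildSessionFactory();

    It is also worth double-checking that the .hbm.xml file's Build Action is set to Embedded Resource in whichever project ends up holding it.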

    Read the article

  • Error building Visual Studio 2010 Silverlight 4 projects on Windows 7 with XP Mode

    - by Kevin Dente
    I installed Visual Studio 2010 Beta 2 in an XP Mode VM on Windows 7. Then I created a trivial Silverlight 4 (beta) project and tried to build it. I get the following error:

        Error 1 The "ValidateXaml" task failed unexpectedly.
        System.IO.FileLoadException: Could not load file or assembly 'file://\tsclient\d\Users\me\Documents\Visual Studio 2010\Projects\SilverlightApplication2\SilverlightApplication2\obj\Debug\SilverlightApplication2.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)
        File name: 'file://\tsclient\d\Users\me\Documents\Visual Studio 2010\Projects\SilverlightApplication2\SilverlightApplication2\obj\Debug\SilverlightApplication2.dll'
        --- System.NotSupportedException: An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=155569 for more information.
           at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
           at System.Reflection.RuntimeAssembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
           at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks)
           at System.Reflection.RuntimeAssembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, Boolean suppressSecurityChecks, StackCrawlMark& stackMark)
           at System.Reflection.Assembly.LoadFrom(String assemblyFile)
           at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute(ITask task)
           at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute(ITask task)
           at Microsoft.Silverlight.Build.Tasks.ValidateXaml.Execute()
           at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
           at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)

    I believe this is related to the fact that XP Mode redirects the My Documents folder to the host, turning it into a network share location, and some sort of CAS / security policy is being triggered. Anyone know how to fix it?
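    The exception itself points at the usual workaround: because the XP Mode project folder is reached through \\tsclient\..., the build treats it as a network location, and .NET 4 refuses the load unless the loadFromRemoteSources switch is enabled (or the project is moved onto a drive local to the VM). A hedged sketch of the switch, added to the config file of the process performing the build (for builds inside Visual Studio that would be devenv.exe.config):

        <configuration>
          <runtime>
            <!-- grant full trust to assemblies loaded from a network path
                 (such as \\tsclient\d\...) instead of failing the load;
                 only appropriate if that location is trusted -->
            <loadFromRemoteSources enabled="true" />
          </runtime>
        </configuration>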

    Read the article

  • How to protect copyright on large web applications?

    - by Saif Bechan
    Recently I read about Myows, described as "the universal copyright management and protection app for smart creatives". It is used to protect copyright and more.

    Currently I am working on a large web application which is in a late testing phase. Because of the complexity of the app there are not many versions of it online, so copyright will be a huge issue for me, as much of the code is in JavaScript and is easily copied. I was glad to see that there is a company out there providing such services, and naturally I wanted to know if there were people using it. I did not know that this type of concept was so new.

    Is protecting copyright a good idea for a large web application? If so, do you think Myows will be worth using, or are there better ways to achieve that?

    Update: Wow, there is no better person to have answered this question than the creator himself. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will be looking further into the subject and start uploading my code to see how it works out. I will leave this question up because I do want some more suggestions on this topic.

    Read the article

  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob.

    Now, those fields are at most 4 KB (but the average is 2-3 KB). The PostgreSQL documentation mentions that LOs are good when the fields are big, but I didn't see what 'big' meant.

    I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I was stuck having to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug has brought forward a potential compatibility issue, and I eventually found out that Large Objects are a pain to deal with compared to a normal field.

    So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex, so there is some overhead in encoding and decoding, and this would hurt the performance.

    Are there good benchmarks of the performance of both of these? Has anybody made the switch and seen a difference?

    Read the article

  • Looking for advice on importing large dataset in sqlite and Cocoa/Objective-C

    - by jluckyiv
    I have a fairly large hierarchical dataset I'm importing. The total size of the database after import is about 270MB in sqlite. My current method works, but I know I'm hogging memory as I do it. For instance, if I run with Zombies, my system freezes up (although it will execute just fine if I don't use that Instrument). I was hoping for some algorithm advice.

    I have three hierarchical tables comprising about 400,000 records. The highest level has about 30 records, the next has about 20,000, and the last has the balance. Right now I'm using nested for loops to import. I know I'm creating an unreasonably large object graph, but I'm also looking to serialize to JSON or XML, because I want to break up the records into downloadable chunks for the end user to import a la carte. I have the code written to do the serialization, but I'm wondering if I can serialize the object graph if I only have pieces in memory.

    Here's pseudocode showing the basic process for the sqlite import. I left out the unnecessary detail.

        [database open];
        [database beginTransaction];
        NSArray *firstLevels = [[FirstLevel fetchFromURL:url] retain];
        for (FirstLevel *firstLevel in firstLevels)
        {
            [firstLevel save];
            int id1 = [firstLevel primaryKey];
            NSArray *secondLevels = [[SecondLevel fetchFromURL:url] retain];
            for (SecondLevel *secondLevel in secondLevels)
            {
                [secondLevel saveWithForeignKey:id1];
                int id2 = [secondLevel primaryKey];
                NSArray *thirdLevels = [[ThirdLevel fetchFromURL:url] retain];
                for (ThirdLevel *thirdLevel in thirdLevels)
                {
                    [thirdLevel saveWithForeignKey:id2];
                }
                [database commit];
                [database beginTransaction];
                [thirdLevels release];
            }
            [secondLevels release];
        }
        [database commit];
        [database release];
        [firstLevels release];

    Read the article

  • Efficient way to create a large number of SharePoint folders

    - by BeraCim
    Hi all: I'm currently creating a large number of SharePoint folders within a list (e.g. ~800 folders), with each folder containing a different number of items. The way it is currently done is that the code programmatically reads the content types, items, event listeners and the like off the same folder in another web, then creates the same folder in the current web. That ran reasonably fine and fast in a dev environment. However, when it went to an environment with WFEs and farms, it slowed down a lot.

    I have checked that there are no leaks in the code, and that the code follows SharePoint coding best practices. At the moment I'm looking at it at the code level. From your experience, are there any efficient ways of creating a large number of SharePoint folders, lists and items?

    EDIT: I'm currently using the SharePoint API, but will be looking at moving to the Web Service in the future. I'm interested in looking at both options though. Code-wise, it's just the general reading of a folder and its content types plus items and their details, then creating the same folder in the same list with the same content types, then copying over the items using a batch update. I want to know whether there are more efficient ways of doing the above. Thanks.
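    For reference, a baseline sketch of the folder-creation step with the server object model; the web URL, list name and folder names are placeholders, and the content-type/item copying described above would sit inside the loop. On a farm it is usually worth timing which of those steps actually dominates before optimizing the creation call itself:

        using System;
        using Microsoft.SharePoint;

        class FolderCreator
        {
            // Creates folders directly through SPListItemCollection.Add, which avoids
            // re-reading the folder collection for every insert.
            static void CreateFolders(string webUrl, string listName, int count)
            {
                using (SPSite site = new SPSite(webUrl))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList list = web.Lists[listName];
                    string root = list.RootFolder.ServerRelativeUrl;

                    for (int i = 0; i < count; i++)
                    {
                        SPListItem folder = list.Items.Add(root, SPFileSystemObjectType.Folder, "Folder_" + i);
                        folder.Update();
                    }
                }
            }
        }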

    Read the article

  • Discover periodic patterns in a large data-set

    - by Miner
    I have a large sequence of tuples on disk in the form

        (t1, k1) (t2, k2) ... (tn, kn)

    ti is a monotonically increasing timestamp and ki is a key (assume a fixed-length string if needed). Neither ti nor ki is guaranteed to be unique. However, the number of unique tis and kis is huge (millions). n itself is very large (100 million+) and the size of k (approx 500 bytes) makes it impossible to store everything in memory.

    I would like to find periodic occurrences of keys in this sequence. For example, if I have the sequence

        (1, a) (2, b) (3, c) (4, b) (5, a) (6, b) (7, d) (8, b) (9, a) (10, b)

    the algorithm should emit (a, 4) and (b, 2). That is, a occurs with a period of 4 and b occurs with a period of 2.

    If I build a hash of all keys and store, for each key, the average of the differences between consecutive timestamps and the standard deviation of the same, I might be able to make a single pass and report only the keys that have an acceptable standard deviation (ideally, 0). However, this requires one bucket per unique key, whereas in practice I might have very few really periodic patterns. Any better ways?
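    A single-pass sketch of the dictionary approach described above, using Welford's online update so only a few numbers are kept per distinct key (the one-bucket-per-key memory concern still applies); keys whose consecutive gaps have near-zero variance are reported with their mean gap as the period:

        using System;
        using System.Collections.Generic;

        class GapStats
        {
            public long LastT;
            public long Count;          // number of gaps seen so far
            public double Mean, M2;     // running mean / sum of squared deviations
        }

        class PeriodFinder
        {
            static IEnumerable<KeyValuePair<string, double>> FindPeriodic(
                IEnumerable<Tuple<long, string>> stream, double tolerance)
            {
                var stats = new Dictionary<string, GapStats>();
                foreach (var item in stream)
                {
                    GapStats s;
                    if (!stats.TryGetValue(item.Item2, out s))
                    {
                        stats[item.Item2] = new GapStats { LastT = item.Item1 };
                        continue;
                    }
                    double gap = item.Item1 - s.LastT;
                    s.LastT = item.Item1;
                    s.Count++;
                    double delta = gap - s.Mean;      // Welford's online update
                    s.Mean += delta / s.Count;
                    s.M2 += delta * (gap - s.Mean);
                }

                foreach (var kv in stats)
                    if (kv.Value.Count >= 2 && kv.Value.M2 / kv.Value.Count <= tolerance)
                        yield return new KeyValuePair<string, double>(kv.Key, kv.Value.Mean);
            }

            static void Main()
            {
                var seq = new[] { Tuple.Create(1L,"a"), Tuple.Create(2L,"b"), Tuple.Create(3L,"c"),
                                  Tuple.Create(4L,"b"), Tuple.Create(5L,"a"), Tuple.Create(6L,"b"),
                                  Tuple.Create(7L,"d"), Tuple.Create(8L,"b"), Tuple.Create(9L,"a"),
                                  Tuple.Create(10L,"b") };
                foreach (var p in FindPeriodic(seq, 0.0))
                    Console.WriteLine(p.Key + " period " + p.Value);   // a period 4, b period 2
            }
        }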

    Read the article

  • Can a large transaction log cause CPU spikes to occur?

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15GB, with roughly 5GB to the db and 10GB to the transaction log. Just recently a web application that is connecting to that db is timing out.

    I have traced the actions on the web page and examined the queries that execute while these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the db server's CPU spikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say multiple... read about 5). Under this load, timeouts start to occur.

    I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12GB of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks,

    Read the article

  • Getting Recognition for Open-Source Computer Language Projects

    - by Jon Purdy
    I like language a lot, so I write a lot of language-based solutions for programming, automation, and data definition. I'm very much a believer in open-source software, so lately I've started to push these projects to Sourceforge when I start them. I feel that these tools could be quite valuable in the right hands, and that they fill niches that otherwise go unfilled.

    The trouble, for me, is gaining recognition. No matter how useful the software I write, after a certain point I can no longer come up with anything to add or improve. Basically no one but me uses it, so it's not being attacked from enough angles to discover any new weaknesses. I cannot work on a project that doesn't have anything to do, but I won't have anything to do unless I gain recognition by working on it! This is greatly discouraging. It's like giving what you think is a really thoughtful gift to someone who just isn't paying attention.

    So I'm looking for advice on how to network and disseminate information about my projects so that they don't fizzle out like this. Are there any sites, newsgroups, or mailing lists that I've been completely missing?

    Read the article

  • Migrating a Large amount of data from old publishing site to new site

    - by tommizzle
    Hi, I am currently in the process of creating a new news/publishing site on the Movable Type platform. There are around 20 or so sites with 20,000+ rows of data to be moved/aggregated to ~8 sites (we have a number of location-specific sites and are going to aggregate the content from these into one single site for each niche). We have discussed how to do this and came to the conclusion that it would probably be better to hire somebody to do it (I could probably do it, but I'm limited on time and am sure that a specialist would be more efficient).

    So my questions to you guys are:

        1. What kind of skill set should we look for in an applicant?
        2. There will be a large amount of input from our side... is getting somebody to work remotely out of the question?
        3. How long would a task like this traditionally take (I know this question is very subjective, but an estimation would be awesome)?
        4. Do you have any recommendations for firms who would be able to take on a large task like this?

    Thanks in advance, Tom

    Read the article

  • Qt/C++, Problems with large QImage

    - by David Günzel
    I'm pretty new to C++/Qt and I'm trying to create an application with Visual Studio C++ and Qt (4.8.3). The application displays images using a QGraphicsView, and I need to change the images at pixel level. The basic code is (simplified):

        QImage* img = new QImage(img_width, img_height, QImage::Format_RGB32);
        while (do_some_stuff) {
            img->setPixel(x, y, color);
        }
        QGraphicsPixmapItem* pm = new QGraphicsPixmapItem(QPixmap::fromImage(*img));
        QGraphicsScene* sc = new QGraphicsScene;
        sc->setSceneRect(0, 0, img->width(), img->height());
        sc->addItem(pm);
        ui.graphicsView->setScene(sc);

    This works well for images up to around 12000x6000 pixels. The weird thing happens beyond this size. When I set img_width = 16000 and img_height = 8000, for example, the line img = new QImage(...) returns a null image. The image data should be around 512,000,000 bytes, so it shouldn't be too large, even on a 32-bit system. Also, my machine (Win 7 64-bit, 8 GB RAM) should be capable of holding the data.

    I've also tried this version:

        uchar* imgbuf = (uchar*) malloc(img_width * img_height * 4);
        QImage* img = new QImage(imgbuf, img_width, img_height, QImage::Format_RGB32);

    At first, this works. The img pointer is valid and calling img->width(), for example, returns the correct image width (instead of 0, in case the image pointer is null). But as soon as I call img->setPixel(), the pointer becomes null and img->width() returns 0.

    So what am I doing wrong? Or is there a better way of modifying large images at pixel level? Regards, David

    Read the article

  • Windows.Forms RichTextBox control - avoid inserting large data

    - by SchlaWiener
    I have a Windows Form with a RichTextBox on it. The content of the RichTextBox is written to a database field that is limited to 64k of data. For my purposes that is way more than enough text to store. I have set the MaxLength property to avoid inserting more data than allowed:

        rtcControl.MaxLength = 65536

    However, that only restricts the number of characters the user is allowed to put in as text. With the formatting overhead from the RTF, I can type more text than I should be allowed to. It gets even worse if I insert a large image, which doesn't increase the TextLength at all, but the RTF length grows quite a lot.

    At the moment I check the length of the RichTextBox's Rtf property in the FormClosing event and display a message to the user if it's too large. However, that is just a workaround, because I want to disallow putting more data than allowed into the control (like in a TextBox: if you exceed the MaxLength property, nothing is inserted into the control and you hear the default beep).

    Any ideas how to achieve this? I already tried using a custom control which extends the RichTextBox and shadows the Rtf property to intercept the insertion, but it seems it isn't executed when I add text. Even the TextChanged event does not fire when I type something in the control.
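    One workaround, offered as a sketch rather than a built-in RichTextBox feature: watch TextChanged, and when the serialized Rtf exceeds the budget, roll the control back to the last RTF string that still fit. It assumes the 64k column corresponds roughly to Rtf.Length, and the cursor position after a revert is only approximately restored:

        using System;
        using System.Media;
        using System.Windows.Forms;

        // Keeps a RichTextBox's Rtf under a length budget by reverting any change
        // that pushes the serialized RTF past the limit.
        public class RtfLengthLimiter
        {
            private readonly RichTextBox _box;
            private readonly int _maxRtfLength;
            private string _lastGoodRtf;
            private bool _reverting;

            public RtfLengthLimiter(RichTextBox box, int maxRtfLength)
            {
                _box = box;
                _maxRtfLength = maxRtfLength;
                _lastGoodRtf = box.Rtf;
                _box.TextChanged += OnTextChanged;
            }

            private void OnTextChanged(object sender, EventArgs e)
            {
                if (_reverting) return;

                if (_box.Rtf.Length > _maxRtfLength)
                {
                    _reverting = true;
                    try
                    {
                        int pos = _box.SelectionStart;
                        _box.Rtf = _lastGoodRtf;                        // throw the change away
                        _box.SelectionStart = Math.Min(pos, _box.TextLength);
                        SystemSounds.Beep.Play();                       // same feedback as MaxLength
                    }
                    finally { _reverting = false; }
                }
                else
                {
                    _lastGoodRtf = _box.Rtf;                            // remember last valid state
                }
            }
        }

    It would be wired up once in the form's constructor, e.g. new RtfLengthLimiter(rtcControl, 65536).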

    Read the article

  • Git repository gets corrupted when I do a large commit: "Possible repository corruption on the remote side"

    - by mindthief
    Hi all, a friend of mine and I have been trying to use git for a project. It is hosted on his server, and I clone it with:

        git clone [email protected]:/path/to/git/repos.git

    Pretty standard stuff, and it works great for a while. But every time one of us adds a large commit (which git supposedly handles very well), on the order of 100MB or so, the git repository gets kind of broken. Basically, at this point I can still push new changes and pull other changes (I think), but when I try to clone the repository in a fresh location using the command above, I get an error message that says:

        $ git clone [email protected]:/path/to/git/repos.git
        Initialized empty Git repository in /local/path/to/repos/.git/
        remote: Counting objects: 1455, done.
        remote: Compressing objects: 100% (1235/1235), done.
        error: git upload-pack: git-pack-objects died with error.
        fatal: git upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    This has happened 3 or 4 times now, and it's always when I add a large commit. Any idea why this is happening? How can we fix it? We're both using Mac OS X Snow Leopard. Thanks! -M

    Read the article

  • Memorystream and Large Object Heap

    - by Flo
    I have to transfer large files between computers via unreliable connections using WCF. Because I want to be able to resume the transfer and I don't want to be limited in my file size by WCF, I am chunking the files into 1MB pieces. These "chunks" are transported as streams, which works quite nicely so far.

    My steps are:

        1. open a filestream
        2. read a chunk from the file into a byte[] and create a memorystream from it
        3. transfer the chunk
        4. back to 2. until the whole file is sent

    My problem is in step 2. I assume that when I create a memory stream from a byte array, it will end up on the LOH and ultimately cause an OutOfMemoryException. I could not actually produce this error, so maybe I am wrong in my assumption.

    Now, I don't want to send the byte[] in the message, as WCF will tell me the array size is too big. I can change the max allowed array size and/or the size of my chunk, but I hope there is another solution.

    My actual question(s):

        1. Will my current solution create objects on the LOH, and will that cause me problems?
        2. Is there a better way to solve this?

    Btw.: On the receiving side I simply read smaller chunks from the arriving stream and write them directly into the file, so no large byte arrays are involved.
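    On question 1: byte arrays of roughly 85 KB and larger do land on the large object heap, but a MemoryStream constructed over an existing array does not copy it. So one common mitigation is to allocate a single chunk buffer once and reuse it for every chunk rather than newing up 1 MB per iteration, as in this sketch (it assumes each chunk has been fully handed off before the next read overwrites the buffer):

        using System;
        using System.IO;

        class ChunkReader
        {
            private const int ChunkSize = 1024 * 1024;                  // 1 MB chunk
            private readonly byte[] _buffer = new byte[ChunkSize];      // one LOH allocation, reused

            // Reads the next chunk and exposes it as a read-only MemoryStream over
            // the shared buffer, without allocating a new large array per chunk.
            public Stream ReadNextChunk(FileStream file, out int bytesRead)
            {
                bytesRead = file.Read(_buffer, 0, ChunkSize);
                if (bytesRead == 0) return null;                        // end of file
                return new MemoryStream(_buffer, 0, bytesRead, false);
            }
        }

    With one long-lived buffer there is a single large object for the lifetime of the transfer, which avoids the LOH fragmentation that repeated 1 MB allocations can cause.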

    Read the article

  • Consecutive build of VS2005 and VS2008 C++ projects causes LNK1104 error

    - by TestAccount
    I have VS2005 and VS2008 installed on the same machine. I also have a common codebase that I build using both '05 and '08. For this purpose, I have two VC projects: a '08 project called XYZ_2008.vcproj and a '05 project called XYZ_2005.vcproj, and the corresponding two slns as well. Both projects output dlls, libs and pdbs to the same output directory (all with appropriate _2005 and _2008 suffixes).

    Assuming that I am starting from a clean state, I first open XYZ_2005.sln (containing XYZ_2005.vcproj) in VS2005 and build it successfully. Then I close VS2005. Next, I open XYZ_2008.sln (containing XYZ_2008.vcproj) and build (not rebuild) it. At this point, I get an error saying:

        LINK : fatal error LNK1104: cannot open file 'mfc80u.lib'

    If I now rebuild the '08 solution, the error goes away and the build succeeds. The build also succeeds if I directly do a rebuild instead of a build for the '08 sln. In spite of everything being separate, the VS08 build seems to be picking up an MFC8 file (from VS05) instead of an MFC9 file. Can somebody please help with this issue? Thanks in advance!

    Read the article

  • Incremental way of computing quantiles for a large set of data

    - by Gacek
    I need to compute quantiles for a large set of data. Let's assume we can get the data only in portions (i.e. one row of a large matrix). To compute the Q3 quantile one needs to get all the portions of the data, store them somewhere, then sort the whole thing and compute the quantile:

        List<double> allData = new List<double>();
        // this is only an example. In fact the portions of data are not rows of some matrix
        foreach (var row in matrix)
        {
            allData.AddRange(row);
        }
        allData.Sort();
        double p = 0.75 * allData.Count;
        int idQ3 = (int)Math.Ceiling(p) - 1;
        double Q3 = allData[idQ3];

    Now, I would like to find a way of computing this without storing the data in a separate variable. The best solution would be to compute some parameters of the intermediate results for the first row and then adjust them step by step for the next rows.

    Notes:

        - These datasets are really big (about 5000 elements in each row).
        - The Q3 can be estimated; it doesn't have to be an exact value.
        - I call the portions of data "rows", but they can have different lengths! Usually it varies not so much (+/- a few hundred samples), but it varies!

    This question is similar to this one: http://stackoverflow.com/questions/1058813/on-line-iterator-algorithms-for-estimating-statistical-median-mode-skewness but I need to compute quantiles. There are also a few articles on this topic, e.g.:

        http://web.cs.wpi.edu/~hofri/medsel.pdf
        http://portal.acm.org/citation.cfm?id=347195&dl

    But before I try to implement these, I wanted to ask whether there are maybe other, quicker ways of computing the 0.25/0.75 quantiles?
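    Since an estimate is acceptable, one simple incremental option (a sketch, less rigorous than the streaming-percentile papers linked above) is a fixed-size uniform reservoir sample: feed each row in as it arrives, keep a bounded random sample of everything seen so far, and take the 0.75 quantile of the sample at the end.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Approximates a quantile over data arriving row by row, using a fixed-size
        // uniform reservoir sample (Algorithm R) instead of storing everything.
        class ReservoirQuantile
        {
            private readonly List<double> _sample;
            private readonly int _capacity;
            private readonly Random _rng = new Random();
            private long _seen;

            public ReservoirQuantile(int capacity)
            {
                _capacity = capacity;
                _sample = new List<double>(capacity);
            }

            public void AddRange(IEnumerable<double> row)
            {
                foreach (double x in row)
                {
                    _seen++;
                    if (_sample.Count < _capacity)
                        _sample.Add(x);
                    else
                    {
                        long j = (long)(_rng.NextDouble() * _seen);   // keep with prob capacity/seen
                        if (j < _capacity) _sample[(int)j] = x;
                    }
                }
            }

            public double Quantile(double p)                          // e.g. p = 0.75 for Q3
            {
                var sorted = _sample.OrderBy(x => x).ToList();
                int idx = (int)Math.Ceiling(p * sorted.Count) - 1;
                return sorted[Math.Max(idx, 0)];
            }
        }

    Usage would be roughly: create one ReservoirQuantile(10000), call AddRange(row) for every row, then call Quantile(0.75) once at the end; memory stays bounded by the reservoir size regardless of how many rows arrive.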

    Read the article

  • Finding ALL positions of a substring in a large string in C#

    - by Tommy
    Alright, so what I have is a large string I need to parse, and what I need to happen is to find all the instances of extract"(me,i-have lots. of]punctuation and store their positions in a list. So if that piece of string appeared at the beginning and in the middle of the larger string, both of them would be found and their indexes would be added to the List - the List would contain 0 and the other index, whatever it would be.

    I've been playing around, and string.IndexOf does almost what I'm looking for, and I've written some code, but I can't seem to get it to work:

        List<int> inst = new List<int>();
        int index = 0;
        while (index < source.LastIndexOf("extract\"(me,i-have lots. of]punctuation", 0) + 39)
        {
            int src = source.IndexOf("extract\"(me,i-have lots. of]punctuation", index);
            inst.Add(src);
            index = src + 40;
        }

    inst = the list, source = the large string. Any better ideas?
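    For comparison, a sketch of the usual repeated-IndexOf pattern: feed the previous hit plus the needle's length back in as the start position and stop at -1, with no LastIndexOf involved. The search string is passed in as a parameter here rather than repeated inline:

        using System;
        using System.Collections.Generic;

        class AllIndexes
        {
            // Returns the start index of every (non-overlapping) occurrence of
            // 'needle' inside 'source'.
            static List<int> IndexesOf(string source, string needle)
            {
                var result = new List<int>();
                int index = source.IndexOf(needle, 0, StringComparison.Ordinal);
                while (index != -1)
                {
                    result.Add(index);
                    index = source.IndexOf(needle, index + needle.Length, StringComparison.Ordinal);
                }
                return result;
            }
        }

    Called as IndexesOf(source, "extract\"(me,i-have lots. of]punctuation"), it returns every match position without hard-coding the 39/40 offsets.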

    Read the article

  • Including/Organizing HTML in a large JavaScript project

    - by Bill Zimmerman
    Hi, I've got a fairly large web app, with several mini applets on each page. These applets are almost always identical jQuery apps. I am looking for advice on how I should organize/include the smaller parts of these jQuery apps within my larger project. For example, each app has several independent tabs. If possible, I would like to store each of the tabs as a separate .html file, because this makes development easier.

    My requirements are:

        1. All of the html 'tabs' are loaded on the client's end when the page loads. I would like to avoid any delays caused by dynamically requesting the tab html.
        2. If possible, I would like to minimize the raw data sent. For example, it would be preferable to send each tab one time, instead of sending each tab 10 times if there are ten applets on that page.

    Questions:

        1. What are my options for 'including' the HTML files / javascript code?
        2. Any tips for keeping my development simple in this situation? Surely there has to be a better way than just editing one massive html file when working with large pages.

    Read the article

  • Loading/Displaying large amount of data on webpage.

    - by jb
    I have a webpage which contains a table for displaying a large amount of data (on average from 2,000 to 10,000 rows). The page takes a long time to load/render, which is understandable.

    The problem is, while the page is loading, the PC's memory usage skyrockets (500MB on my test system is in use by iexplorer) and the whole PC grinds to a halt until it has finished, which can take a minute or two. IE hangs until it is complete, and switching to another running program is the same.

    I need to fix this - and ideally I want to accomplish two things:

        1. Load individual parts of the page separately, so the page can render initially without the large data table. A loading div will be placed there until it is ready.
        2. Don't use up so much memory or local resources while rendering - so at least the user can use a different tab/application at the same time.

    How would I go about doing both or either of these? I'm an applications programmer by trade, so I am still a little fuzzy on the things I can do in a web environment. Cheers all.

    Read the article

  • How to get large pictures from a photo album via the Facebook Graph API

    - by Nav
    I am currently using the following code to retrieve all the photos from a user profile:

        FB.api('/me/albums?fields=id,name', function(response) {
          //console.log(response.data.length);
          for (var i = 0; i < response.data.length; i++) {
            var album = response.data[i];
            FB.api('/' + album.id + '/photos', function(photos) {
              if (photos && photos.data && photos.data.length) {
                for (var j = 0; j < photos.data.length; j++) {
                  var photo = photos.data[j];
                  // photo.picture contains the link to the picture
                  var image = document.createElement('img');
                  image.src = photo.picture;
                  document.body.appendChild(image);
                  image.className = "border";
                  image.onclick = function() {
                    //this.parentNode.removeChild(this);
                    document.getElementById(info.id).src = this.src;
                    document.getElementById(info.id).style.width = "220px";
                    document.getElementById(info.id).style.height = "126px";
                  };
                }
              }
            });
          }
        });

    However, the photos it returns are of poor quality and are basically thumbnails. How do I retrieve larger photos from the albums? Generally for the profile picture I use ?type=large, which returns a decent profile image, but type=large is not working for photos from photo albums. Also, is there a way to get the photos in zip format from Facebook once I specify the photo URL, as I want users to be able to download the photos?

    Read the article

  • How to understand existing projects

    - by John
    Hi. I am a trainee developer and have been writing .NET applications for about a year now. Most of the work I have done has involved building new applications (mainly web apps) from scratch, and I have been given more or less full control over the software design. This has been a great experience; however, as a trainee developer my confidence that the approaches I have taken are the best is minimal.

    Ideally I would love to collaborate with more experienced developers (I find this the best way I learn), however in the company I work for developers tend to work in isolation (a great shame for me). Recently I decided that a good way to learn more about how experienced developers approach their design might be to explore some open source projects. I found myself a little overwhelmed by the projects I looked at. With my level of experience it was hard to understand the body of code I faced.

    My question is a slightly fuzzy one. How do developers approach the task of understanding a new medium-to-large-scale project? I found myself poring over lots of code and struggling to see the wood for the trees. At any one time I felt that I could understand a small portion of the system but not see how it all fits together. Do others get this same feeling? If so, what approaches do you take to understanding the project? Do you have any other advice about how to learn design best practices? Any advice will be very much appreciated. Thank you.

    Read the article

  • PHP: problem rendering large images (error 321)

    - by JP19
    Hi... it's me again with a PHP problem :) The following is part of my PHP script, which renders JPEG images:

        ...
        $tf = $requested_file;
        $image_type = "jpeg";
        header("Content-type: image/${image_type}");
        $CMD = "\$image=imagecreatefrom${image_type}('$tf'); image${image_type}(\$image);";
        eval($CMD);
        exit;
        ...

    There is no syntax error, because the above code works fine for small images, but for large images it gives:

        Error 321 (net::ERR_INVALID_CHUNKED_ENCODING): Unknown error.

    in the browser. To be sure, I created two images from the same source image using ImageMagick - one resized to 10% of the original and the other to 90%.

        http://mostpopularsports.net/images/misc/ttt10.jpg works
        http://mostpopularsports.net/images/misc/ttt90.jpg gives Error 321 in the browser.

    There is a related question with a solution posted by the OP here: Error writing content through Apache. But I cannot understand how to make the fix. Can someone help me with it?

    I have looked at the headers in Chrome. For the first request, everything is fine. For the second request, the request headers are all garbled. Both images are JPEGs (as they are created by ImageMagick, but still, to be sure, I checked):

        misc/ttt10.jpg: JPEG image data, JFIF standard 1.01
        misc/ttt90.jpg: JPEG image data, JFIF standard 1.01

    Finally, the way I fixed it was to remove the Transfer-Encoding: chunked header from the response. [This header was sent by Apache only when the data was large enough.] (I had an internal proxy, so I did it in the proxy script - otherwise one may need to do it in the Apache settings.)

    There were some good answers and I have selected the one that helped me solve the problem best. Thanks, JP

    Read the article
