Search Results

Search found 23021 results on 921 pages for 'process monitoring'.

  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
    I may seem a bit wordy at first, but I hope it will be easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable hobby of collecting as many species of wood from around the world as I can (over 2,900 so far). OK, that is the real world. Meanwhile, I have spent over 8 years compiling over 5.8 MB of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome, so I could build a new database-driven home for all this research. I am still slow at it, but getting there.

    The original premise was to find evidence of as many species of wood in the world as I can; the more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven. So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project.

    I have a file of all the woods in the second-largest wood collection in the world, the Tervuren wood collection in Belgium, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren into the 'species' table, where I keep the reported data, would be a major advancement for the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not duplicates in the Tervuren file in the first place; they represent the wood names common to both files. It is common sense that the woods common to both would create new duplicates when merged.

    At one point, with the help of others from another forum, I may very well have finally got the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) that it had gone away! After looking up on the Net what could have caused this, one likely reason is that the MySQL timeout lapsed, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy, so I cannot go about adjusting any config file. For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy all worked well.

    The species file has about 18 columns in it, but only 5 columns match the columns in the Tervuren file (name for name, and the same collation). The rest of the columns are just along for the ride, so to speak. The common key in both is the 'species_name' column. I am not sure it is at all proper to call one a primary key and the other a foreign key, since there really is no relational connection between them. One is just more data for the other and can disappear afterwards, never to be referred to by the working code in the application.

    I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names, plus others) without generating NEW duplicates in the first place. Watch out for thinking that a SELECT DISTINCT statement will do the job, because absolutely NO records in the species table may get destroyed in the process, and there is no way (well, that I know of) to tell the DISTINCT command this. Yes, the original 'species' table has duplicates in it even before all this, but trust me: those have to be removed the long, hard way, manually, record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form while bringing the new names in tervuren_target.species_name into species.species_name.

    I am hoping and thinking that a straight SQL solution should work, except for that nasty timeout. How do I get past that? Could it mean that I may have to turn to a PHP-plus-SQL method? Or would I have to break the Tervuren file up into a few smaller ones and run them independently (I hope not)? So far, what seems like it should be easy has proven to be unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it may seem on the surface.

    By the way, I am running a quad-core 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end, and I have a direct Ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be equivalent to (ahem) lighting a rocket under my project (especially compared to doing this record by record manually)!

    This is my first time in this forum, so I do not know how I can receive any replies. Do I have to come back here periodically, or are replies emailed out as well? It would be great if you CC'd copies to me at billmudry at rogers.com :-) Much thanks for your patience and help, Bill Mudry, Mississauga, Ontario, Canada (next to Toronto).
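
    A hedged sketch of the usual SQL pattern for this (the four columns besides species_name are placeholders for whatever the five shared columns are actually called): insert only those Tervuren rows whose species_name has no match yet, and never touch the existing rows at all.

        INSERT INTO species_master (species_name, col2, col3, col4, col5)
        SELECT t.species_name, t.col2, t.col3, t.col4, t.col5
        FROM tervuren_target AS t
        LEFT JOIN species_master AS s
               ON s.species_name = t.species_name
        WHERE s.species_name IS NULL;

    For the timeout on shared hosting, adding LIMIT 1000 to the SELECT and re-running the statement until it inserts 0 rows processes the merge in batches: each batch's inserts make those names match on the next run, so repeated runs naturally work through the remainder. This assumes the Tervuren working table is itself duplicate-free, as described above, and that both species_name columns share the same collation.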

  • Using Interfaces in action signature

    - by Dmitry Borovsky
    Hello, I want to use an interface in an action signature, so I've tried making my own model binder by deriving from DefaultModelBinder:

        public class InterfaceBinder<T> : DefaultModelBinder where T : new()
        {
            protected override object CreateModel(ControllerContext controllerContext,
                ModelBindingContext bindingContext, Type modelType)
            {
                return base.CreateModel(controllerContext, bindingContext, typeof(T));
            }
        }

        public interface IFoo
        {
            string Data { get; set; }
        }

        public class Foo : IFoo /* other interfaces */
        {
            /* a lot of members */
            public string Data { get; set; }
        }

        public class MegaController : Controller
        {
            public ActionResult Process(
                [ModelBinder(typeof(InterfaceBinder))] IFoo foo) { /* bla-bla-bla */ }
        }

    But it doesn't work. Does anybody have an idea how to achieve this behaviour? And yes, I know that I can write my own implementation of IModelBinder, but I'm looking for an easier way.
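
    One thing stands out in the snippet as posted: InterfaceBinder<T> is generic, so typeof(InterfaceBinder) will not compile; the ModelBinder attribute needs a closed, constructible binder type. A minimal sketch of that fix, assuming Foo is the concrete type you want bound behind the interface:

        public class MegaController : Controller
        {
            // Closing the generic lets MVC instantiate the binder;
            // the action itself still sees only the IFoo interface.
            public ActionResult Process(
                [ModelBinder(typeof(InterfaceBinder<Foo>))] IFoo foo)
            {
                return View(foo);
            }
        }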

  • Best tool to check and ensure PDF/A compatibility under Linux

    - by Sven Lilienthal
    I am working on an online portal where researchers can upload their research papers. One requirement is that all PDFs are stored in PDF/A format. As I can't rely on the users to generate PDF/A-conforming documents, I need a tool to check standard PDFs and convert them into PDF/A. What is the best tool you know of, judged on price, quality, speed, and available APIs? Open-source tools would be preferred, but a search revealed none. iText can create PDF/A, but converting isn't easy, as you have to read every page and copy it into a new document, losing all bookmarks and annotations in the process (at least as far as I know; if you know of an easy solution, let me know). APIs should be available for PHP or Java, or a command-line tool should be provided. Please do not list GUI-only or online-only solutions.
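
    For the command-line route, Ghostscript can attempt a PDF/A conversion. A hedged sketch of the documented recipe (it needs the PDFA_def.ps definition file shipped with Ghostscript, and exact flags vary by version, so treat this as a starting point rather than a verified command):

        gs -dPDFA -dBATCH -dNOPAUSE \
           -sProcessColorModel=DeviceRGB \
           -sDEVICE=pdfwrite \
           -sOutputFile=output-pdfa.pdf \
           PDFA_def.ps input.pdf

    Note that Ghostscript converts rather than validates; conformance still needs to be checked with a separate validator.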

  • IE 8 Caching Problem

    - by Jeff Catania
    One of my javascript sources had an extra comma that was throwing an error in IE8. So I opened up my editor, deleted the comma, and saved. I reloaded IE8, but it was still pulling the old js file. I deleted everything in "Delete Browsing History...", and restarted the browser. It is still pulling the old file. I even set up a log on my server to show whenever the js file was requested. When reloading with IE, the js file is never requested. I tried doing the same process in Chrome and FF, and it pulled the new file and logged properly on the server. Is there some other cache that I am failing to clear in IE that would cause this problem?
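
    Until the root cause turns up, a common workaround is to version the script URL so IE treats the file as a brand-new resource. The file name and version value below are placeholders; bump the query string whenever the file changes:

        <!-- IE caches by full URL, so a changed query string forces a re-fetch -->
        <script type="text/javascript" src="/js/site.js?v=2"></script>

    If the request never reaches the server even after this, it is worth checking for an intermediate proxy cache rather than the browser cache.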

  • Packaging LAME.exe with a C# project

    - by user347821
    Please forgive my noob-ness on this, but how do I package LAME.exe with a C# setup project? Currently, I have LAME being called like this:

        // use StringBuilder to create arguments
        var psinfo = new ProcessStartInfo(@"lame.exe")
        {
            Arguments = sb.ToString(),
            WorkingDirectory = Application.StartupPath,
            WindowStyle = ProcessWindowStyle.Hidden
        };
        var p = Process.Start(psinfo);
        p.WaitForExit();

    This works in debug and release modes on the development machine, but when I create a setup project for it, it never creates the MP3. LAME and the compiled code reside in the same directory when installed. Help!
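
    Two things usually bite here, both offered as assumptions worth checking: the setup project has to include lame.exe explicitly (add it to the Application Folder in the File System editor), and the bare "lame.exe" name only resolves if the process's current directory happens to be the install folder. Building the full path sidesteps the second problem; a sketch:

        // Resolve lame.exe relative to the installed application,
        // not the process's current working directory.
        string lamePath = Path.Combine(Application.StartupPath, "lame.exe");
        var psinfo = new ProcessStartInfo(lamePath)
        {
            Arguments = sb.ToString(),
            WorkingDirectory = Application.StartupPath,
            WindowStyle = ProcessWindowStyle.Hidden
        };
        using (var p = Process.Start(psinfo))
        {
            p.WaitForExit();
        }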

  • WCF Windows service permissions problem

    - by Elad
    I have created a WCF service and hosted it as a Windows service. To install the project I created an installation project (as described here). The tutorial says to set the serviceProcessInstaller1 Account property in ProjectInstaller.cs to Network Service. With this setting the service did not start on the server; when I tried to start the process manually, it immediately returned to the stopped state. When I changed the Account to LocalSystem, the service worked properly. My questions are: Any ideas why it won't work with the Network Service account? What are the security implications of using a server with the LocalSystem account? This server is used locally on the intranet as a reporting server for other servers.
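
    One frequent culprit, offered as an assumption since the binding isn't shown: if the service listens on HTTP, the Network Service account needs an HTTP namespace reservation that LocalSystem effectively gets for free. On Windows Server 2008 and later the reservation looks like the line below (the URL is a placeholder matching whatever base address the service uses; on Server 2003 the equivalent tool is httpcfg.exe). The service's actual failure reason should also be in the Windows Event Log, which is the first place to check.

        netsh http add urlacl url=http://+:8731/MyWcfService/ user="NT AUTHORITY\NETWORK SERVICE"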

  • ajax response redirect problem

    - by zurna
    When my member registration form is correctly filled in and submitted, the server responds with a redirect link, but my AJAX call does not redirect the website. I do not receive any errors. Where is my mistake?

        <script type="text/javascript">
        $(document).ready(function() {
            $("[name='submit']").click(function() {
                $.ajax({
                    type: "POST",
                    data: $(".form-signup").serialize(),
                    url: "http://www.refinethetaste.com/FLPM/content/myaccount/signup.cs.asp?Process=Add2Member",
                    success: function(output) {
                        if (output.Redirect) {
                            window.location.href = output.Redirect;
                        } else {
                            $('.sysMsg').html(output);
                        }
                    },
                    error: function(output) {
                        $('.sysMsg').html(output);
                    }
                });
            });
        });
        </script>

    ASP code:

        If Session("LastVisitedURL") <> "" Then
            Response.Redirect Session("LastVisitedURL")
        Else
            Response.Redirect "?Section=myaccount&SubSection=myaccount"
        End If
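
    As far as the snippets show, the mismatch is this: Response.Redirect makes the XMLHttpRequest itself follow the redirect, so success receives the redirected page's HTML as a plain string, and output.Redirect is always undefined. Returning JSON and letting the client navigate is the usual fix; a sketch keeping the Redirect property name from the original:

        ' ASP side: answer the AJAX call with JSON instead of redirecting
        If Session("LastVisitedURL") <> "" Then
            Response.Write "{""Redirect"": """ & Session("LastVisitedURL") & """}"
        Else
            Response.Write "{""Redirect"": ""?Section=myaccount&SubSection=myaccount""}"
        End If

    On the jQuery side, add dataType: "json" to the $.ajax options so output arrives as a parsed object rather than a string.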

  • Legality, terms of service for performing a web crawl

    - by Berlin Brown
    I was going to crawl a site for some research I was collecting, but apparently the terms of service are quite clear on the topic. Is it illegal to not "follow" the terms of service? And what can the site normally do about it? Here is an example clause in the TOS. Also, what about sites that don't provide this particular clause? Restrictions: "use any robot, spider, site search application, or other automated device, process or means to access, retrieve, scrape, or index the site" Does it matter that it is just research? Edit: OK, from the standpoint of designing an efficient crawler: should I provide some form of natural language engine to read terms of service and then abide by them?

  • Optimising Database Calls

    - by Dwaine Bailey
    I have a database that is filled with information for films, which is in turn read into the database from an XML file on a webserver. What happens is the following:

        1. Gather and parse the XML, storing the film info as objects
        2. Begin a transaction
        3. For every film object found:
           - check whether a record for the film exists in the database
           - if there is no film record, write the data for the film
        4. Commit the transaction

    Currently I just test for the existence of a film using the very basic:

        SELECT film_title FROM film WHERE film_id = ?

    If that returns a row the film exists; if not, I need to add it. The only problem is that there are many hundreds of records in the database (lots of films!), and because it has to check for the existence of a film before it can write it, the whole process ends up taking quite a while (about 27 seconds for 210 films). Is there a more efficient method of doing this, or just any suggestions in general? The programming language is Objective-C and the database is SQLite 3. Thanks, Dwaine
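
    Assuming film_id can carry a UNIQUE constraint, SQLite can do the existence check itself: INSERT OR IGNORE skips any row that would violate the constraint, which removes the per-film SELECT round trip entirely. A sketch, with the column list trimmed to the two names that appear in the question:

        CREATE UNIQUE INDEX IF NOT EXISTS idx_film_id ON film(film_id);

        BEGIN TRANSACTION;
        -- Prepare once, then re-bind and re-step this statement per film;
        -- duplicates are silently skipped instead of probed for first.
        INSERT OR IGNORE INTO film (film_id, film_title) VALUES (?, ?);
        COMMIT;

    Even keeping the existence check, 27 seconds for 210 films suggests the SELECT is being re-parsed per film or is not hitting an index, so it is worth confirming film_id is indexed and the statement is prepared once.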

  • Potential for SQL injection here?

    - by Matt Greer
    This may be a really dumb question but I figure why not... I am using RIA Services with Entity Framework as the back end. I have some places in my app where I accept user input and directly ask RIA Services (and in turn EF, and in turn my database) questions using their data. Do any of these layers help prevent security issues, or should I scrub my data myself? For example, whenever a new user registers with the app, I call this method:

        [Query]
        public IEnumerable<EmailVerificationResult> VerifyUserWithEmailToken(string token)
        {
            using (UserService userService = new UserService())
            {
                // token came straight from the user; am I in trouble here passing it
                // directly into my DomainService? Should I verify the data here
                // (or in UserService)?
                User user = userService.GetUserByEmailVerificationToken(token);
                ...
            }
        }

    (Whether I should be rolling my own user verification system is another issue altogether; we are in the process of adopting MS's membership framework. I'm more interested in SQL injection and RIA Services in general.)
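
    On the injection question specifically, the usual answer, stated as an assumption since GetUserByEmailVerificationToken isn't shown: if that method is a LINQ to Entities query, EF emits parameterized SQL, so the token travels as a parameter value rather than as SQL text, and classic injection does not apply. A sketch of what that looks like (the context and property names are hypothetical):

        // LINQ to Entities compiles this to roughly
        //   SELECT ... FROM Users WHERE EmailVerificationToken = @p0
        // with the token bound as a parameter, never concatenated into SQL.
        public User GetUserByEmailVerificationToken(string token)
        {
            using (var context = new MyEntities())
            {
                return context.Users
                              .FirstOrDefault(u => u.EmailVerificationToken == token);
            }
        }

    The caveat is any code path that builds Entity SQL or raw SQL strings by concatenation; those need parameters just like classic ADO.NET.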

  • Compact Framework : Read a SQL CE database on a PDA from a PC

    - by CF_Maintainer
    Hello, I have been tasked with upgrading a CF Framework 1.1 suite of apps. Currently, the PC starts a server (after confirming via RAPI that the device exists and is connected) and spawns an app on the PDA as the client. The client process on the PDA talks to the db on the PDA and returns records to the PC app (using SQL CE 2.0, with OpenNETCF 1.4 for communication/IO). I have a chance to upgrade the PC and PDA suite of apps to Framework 3.5 and CF 3.5 respectively. Due to a business requirement, I cannot get rid of the workflow requiring the PC app to show a preview of the work done on the PDA. Question: Are there better ways to achieve the above in general, given the constraints I have? I would really appreciate any ideas/advice.

  • HTML - Word Doc Images

    - by Michael
    Okay, I have roughly 150+ pages of procedures, all written in MS Word. The person who wrote the procedures did an excellent job of recording the process of performing specific tasks. This individual went through and created screen shots in the MS Word doc. There are approximately 2 screen shots per page, so it is roughly 300 images, and I do not want to recreate the wheel. Does anyone know a quick way of handling the images? The written portion is pretty straightforward, but the images are what I am struggling with. Regards, Mike
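
    Two low-effort options, both offered as untested suggestions for this workflow: Word's "Save as Web Page (Filtered)" writes every embedded image out into a _files folder next to the HTML, which may be all that is needed. Alternatively, if the document can be saved in the .docx format, that file is a ZIP archive with the images under word/media/, so a few lines of Python can pull them all out (the file name below is a placeholder):

        import zipfile

        # A .docx file is a ZIP container; embedded images live in word/media/.
        with zipfile.ZipFile("procedures.docx") as doc:
            for name in doc.namelist():
                if name.startswith("word/media/"):
                    doc.extract(name, "extracted_images")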

  • Self updating application install with WIX?

    - by Brian ONeil
    I am writing an application that needs to be installed on a large number of desktops and also needs to update itself. We are looking at WiX for creating the installation. I have used ClickOnce, and it is not a good solution for this install. WiX seems to fit, but I have found no good process for auto-update. I have looked at ClickThrough, but it doesn't seem ready for prime time yet. Does anyone have another good solution to use with WiX (or maybe another install program) for auto-updating an installed application?

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that Database Mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. So to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one whether an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products, that would track all these things for you. But at that moment we had no resort but to write our own PowerShell scripts to do it.

    Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster... we don't need backups"

    Sadly, I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and also during an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for.

    Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

    Back to the "how often" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice. So I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
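
    To make the moving parts above concrete, here is a minimal T-SQL sketch of the routine: frequent CHECKSUM-verified log backups, plus a restore-and-check pass on an offload server. Database names, file paths, and logical file names are placeholders:

        -- Frequent transaction log backup (run from an agent job every few minutes)
        BACKUP LOG [SalesDB]
            TO DISK = N'R:\Backups\SalesDB_log.trn'
            WITH CHECKSUM, INIT;

        -- On the offload server: prove the backup restores, then check it fully
        RESTORE DATABASE [SalesDB_Verify]
            FROM DISK = N'R:\Backups\SalesDB_full.bak'
            WITH MOVE N'SalesDB' TO N'D:\Data\SalesDB_Verify.mdf',
                 MOVE N'SalesDB_log' TO N'L:\Logs\SalesDB_Verify.ldf',
                 CHECKSUM, STATS = 10;
        DBCC CHECKDB ([SalesDB_Verify]) WITH NO_INFOMSGS;

        -- On production: the lightweight physical-only pass
        DBCC CHECKDB ([SalesDB]) WITH PHYSICAL_ONLY;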

  • Is it possible to control the destination folder when checking out a project from VSS 2005?

    - by swolff1978
    We are currently using VSS 2005 for source control - and please let me start by saying I've read a lot of posts on Stack Overflow and I realize VSS is the devil. That being said, it's what we have to work with for now, and I have a question about the checkout process. We have the code organized in a certain hierarchy on the VSS server, but when we do a checkout we don't need that same hierarchy on our machines. Is there a way to control how Visual Studio 2008 and VSS 2005 create the checkout destination folder, so that I don't end up with the code being 9 folders deep on MY machine?

  • How do you sort files numerically?

    - by Zachary Young
    Hello all, First off, I'm posting this because when I was looking for a solution to the problem below, I could not find one on Stack Overflow. So I'm hoping to add a little bit to the knowledge base here. I need to process some files in a directory and need the files to be sorted numerically. I found some examples on sorting, specifically using the lambda pattern, at wiki.python.org, and I put this together:

        #!env/python
        import re

        tiffFiles = """ayurveda_1.tif
        ayurveda_11.tif
        ayurveda_13.tif
        ayurveda_2.tif
        ayurveda_20.tif
        ayurveda_22.tif""".split('\n')

        numPattern = re.compile('_(\d{1,2})\.', re.IGNORECASE)

        tiffFiles.sort(cmp, key=lambda tFile: int(numPattern.search(tFile).group(1)))
        print tiffFiles

    I'm still rather new to Python and would like to ask the community if there are any improvements that can be made to this: shortening the code (removing the lambda), performance, style/readability? Thank you, Zachary
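
    A few improvements follow directly from the question, sketched here rather than prescribed: the cmp argument is unnecessary once key is supplied (and cmp is gone in Python 3), \d{1,2} silently breaks at three-digit numbers, and a named key function with a fallback is easier to read than the lambda:

        import re

        NUM_PATTERN = re.compile(r'_(\d+)\.', re.IGNORECASE)

        def numeric_key(filename):
            """Return the trailing number in names like 'ayurveda_13.tif'."""
            match = NUM_PATTERN.search(filename)
            return int(match.group(1)) if match else 0  # unnumbered names sort first

        tiffFiles.sort(key=numeric_key)

    The raw-string pattern (r'...') also avoids relying on \d surviving normal string escaping.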

  • building mono from svn - android target

    - by Jeremy Bell
    Patches were made to Mono on trunk SVN to support Android. My understanding is that, instead of Koush's system, which builds Mono using the Android NDK build system directly, these patches add support for the Android NDK to the regular Mono configure process. I'd like to play around with this patch, but not being an expert in the Mono build system, I have no idea how to tell it to target the Android NDK, or even where to look. I've been able to build Mono from SVN using the default target (Linux) on Ubuntu, but no documentation on how to target Android was given with the patches. Since anyone not submitting or reviewing a patch is generally ignored on the Mono mailing list, I figured I'd post the question here.

  • Magento: How do I retrieve values from fields submitted with the payment method?

    - by Joseph
    Ok, this is getting a little frustrating. I am trying to create a custom payment module for Magento. The purpose is to use Authorize.Net's CIM so that we don't have to worry so much about PCI compliance. The issue I am having is that users need to be able to access their previous credit cards and use those for purchasing. I have the previous cards being stored in the database, and they are being displayed in the form in the checkout process. My issue comes when I click Continue after selecting the payment method. How do I get the values I submitted in the form, specifically the value of the radio button the saved card is attached to? I am not sure what code, if any, is needed for me to post, so let me know if you need anything in particular. Thanks.
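
    A hedged sketch of the Magento 1.x convention (the class and field names here are assumptions; the assignData hook itself comes from Mage_Payment_Model_Method_Abstract): inputs in the payment form named payment[...] are handed to the active payment method's assignData() when that checkout step is saved, so a radio button named payment[saved_card_id] can be captured like this:

        class My_Cim_Model_Method extends Mage_Payment_Model_Method_Abstract
        {
            public function assignData($data)
            {
                if (!($data instanceof Varien_Object)) {
                    $data = new Varien_Object($data);
                }
                // 'saved_card_id' must match the radio button's name in the
                // payment form template: name="payment[saved_card_id]"
                $this->getInfoInstance()->setAdditionalInformation(
                    'saved_card_id', $data->getData('saved_card_id')
                );
                return $this;
            }
        }

    The stored value is then available from the order's payment object when the transaction is sent to CIM.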

  • Deliberately adding bugs to assess QA processes

    - by bgbg
    How do you know that as many bugs as possible have been discovered and solved in a program? A couple of years ago I read a document about debugging (I think it was some sort of HOWTO). Among other things, that document described a technique in which the programming team deliberately adds bugs to the code and passes it to the QA team. The QA process is considered complete when all the deliberately added bugs have been discovered. Unfortunately, I cannot find this document, or any similar one describing this trick. Can someone please point me to such a document?
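
    For searching purposes, the technique is usually called "defect seeding" or "bebugging", and its classic use is estimation rather than a stopping rule. Under the (strong) assumption that seeded and natural bugs are equally likely to be found, a capture-recapture estimate applies: if QA finds s of the S seeded bugs along with n natural ones, the estimated total number of natural bugs is roughly n * S / s. For example, seeding S = 20 bugs and finding s = 15 of them alongside n = 30 natural bugs estimates 30 * 20 / 15 = 40 natural bugs in total, so about 10 would remain undiscovered.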

  • How to build a distributable jar with Ant for a java project having external jar dependencies

    - by Nikunj Chauhan
    I have a Java project in Eclipse with a class MainClass, containing the main method, in the package com.nik.mypackage. The project also references two external libraries, one.jar and two.jar, which I copied into the lib folder in Eclipse and then added to the build path using the Add JARs function. I want to create an executable JAR of the application using an Ant script, so that a user can run my application with the command:

        C:\>java -jar MyProject-20111126.jar

    I know about the Eclipse plugin that directly exports a Java application as a runnable JAR, but I want to learn Ant and the build process, so I want to create the build.xml manually.
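
    A minimal build.xml sketch for that layout (directory names are assumptions; the Class-Path manifest entries point at copies of the jars next to the built jar, because plain java -jar ignores the CLASSPATH environment variable):

        <project name="MyProject" default="dist">
            <target name="compile">
                <mkdir dir="build/classes"/>
                <javac srcdir="src" destdir="build/classes" includeantruntime="false">
                    <classpath>
                        <fileset dir="lib" includes="*.jar"/>
                    </classpath>
                </javac>
            </target>

            <target name="dist" depends="compile">
                <mkdir dir="dist/lib"/>
                <copy todir="dist/lib">
                    <fileset dir="lib" includes="one.jar two.jar"/>
                </copy>
                <jar destfile="dist/MyProject-20111126.jar" basedir="build/classes">
                    <manifest>
                        <attribute name="Main-Class" value="com.nik.mypackage.MainClass"/>
                        <attribute name="Class-Path" value="lib/one.jar lib/two.jar"/>
                    </manifest>
                </jar>
            </target>
        </project>

    To produce a single self-contained jar instead, a zipgroupfileset inside the jar task can merge the library classes in, at the cost of re-packaging the dependencies.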

  • (fluxus) learning curve

    - by Inaimathi
    I'm trying to have some fun with fluxus, but its manual and online docs all seem to assume the reader is already an expert network programmer who has never heard of Scheme before. Consequently, you get passages that explain the very basics of prefix notation but assume you know how to pipe sound-card data into the program, or set up and connect to an OSC process. Is there any tutorial out there that goes the opposite way, i.e. assumes you already have a handle on the Lisp/Scheme thing but need some pointers before you can properly set up sound sources or an OSC server? Barring that, does anyone know how to get (for example) the system microphone to connect to (fluxus), or how to get it to play a sound file from disk?

  • Opening a read-only OLEDB connection to an MS Access back-end database while allowing updates via a separate front-end

    - by djdilicious
    I have a back-end MS Access 2002-2003 database which stores blog entries. I created a separate front-end database with the forms for entering blog posts into the back-end database. Finally, I have a website using ASP to display the blog entries. The website connects directly to the back-end database using an OLEDB connection object. Whenever I open the form for creating a new post in MS Access, loading the blog post page on the website displays the error: Could not use "; file already in use. I would like to be able to display the older blog posts even though the newest one is in the process of being added.
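
    One knob worth trying, offered as an assumption since the connection string isn't shown: open the ADO connection explicitly in read-only, deny-none share mode, so the web page neither takes nor requires an exclusive lock. A classic-ASP sketch (the path is a placeholder; the other half of the problem is usually Access's own Default open mode and record-locking settings on the front-end):

        Set conn = Server.CreateObject("ADODB.Connection")
        ' adModeRead (1) + adModeShareDenyNone (16): read-only, share with writers
        conn.Mode = 17
        conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                  "Data Source=C:\data\blog_backend.mdb;"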

  • How to free virtual memory ?

    - by Mehdi Amrollahi
    I have a crawler application (in C#) that downloads pages from the web. The application keeps taking more virtual memory, even though I dispose every object and even call GC.Collect(). It has 10 threads, and each thread has a socket that downloads pages. I use the Dispose method and even call GC.Collect() in my application, but within 3 hours it takes 500 MB of virtual memory (500 MB of private bytes in Process Explorer). Then my system hangs and I have to restart my PC. Is there any way to free virtual memory? Thanks.
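
    Steady private-bytes growth in a downloader usually points at response objects and streams that are never disposed, or at downloaded page strings kept alive by some long-lived collection; GC.Collect() cannot reclaim either. A sketch of the disposal pattern, assuming HttpWebRequest sits behind the "socket that downloads pages" (request and ProcessPage are placeholders):

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var reader = new StreamReader(stream))
        {
            // The page string becomes collectible as soon as nothing references
            // it; make sure no queue or cache is holding on to every page.
            string page = reader.ReadToEnd();
            ProcessPage(page);
        }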

  • CodeIgniter based e-shop, shipping and gift address design problem

    - by alexander
    While building an e-commerce platform I have run into design problems. I'm working with the built-in CodeIgniter cart class, which stores all the cart information in the session. Let's say the cart has already been filled with products and the user clicks checkout. When should I store the order in the database? Right after that click, or after several steps of gathering information and storing it in the session? And how do I deal with additional features like different shipping methods? Should I add them to the basket first and keep the additional (gift) address in the session? I don't want to store the gift address in the database yet, because a relation between the gift address and the order is needed, and I don't know what the ID of the order will be. I'm puzzled :) Additionally, I think it's crucial to keep the cart aware of shipping methods and additionally purchased services (selecting a gift address incurs an extra fee), because the cart contents are effectively a receipt. In brief, what is the best practice for processing a checkout?
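
    One common way around the chicken-and-egg problem with the order ID, sketched under assumed table and column names: insert the order row with a 'pending' status as soon as checkout begins, so shipping choices and the gift address can reference its id immediately; flip the status to 'placed' on confirmation, and let a cleanup job purge abandoned pending rows.

        // Hypothetical first checkout step (CodeIgniter controller code)
        $this->db->insert('orders', array(
            'session_id' => $this->session->userdata('session_id'),
            'status'     => 'pending',
            'total'      => $this->cart->total(),
        ));
        $order_id = $this->db->insert_id();

        // Later steps attach details to the now-known id
        $this->db->insert('order_addresses', array(
            'order_id' => $order_id,
            'type'     => 'gift',
            // ...address fields from the form...
        ));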

  • Looking for actively maintained matrix math library for php

    - by Mnebuerquo
    Does anyone know where I might find a PHP matrix math library that is still actively maintained? I need to be able to do the basic matrix operations like reduce, transpose (including non-square matrices), invert, determinant, etc. This question has been asked in the past and then closed with no answers; now I need an answer to the same question. See these links to related questions: http://stackoverflow.com/questions/428473/matrix-artihmetic-in-php http://stackoverflow.com/questions/435074/matrix-arithmetic-in-php-again I was in the process of installing the PEAR Math_Matrix library when I saw these and realized it wouldn't help me. (Thanks, Ben, for putting that comment about transpose in your question.) I can code this stuff myself, but it would make me happier to see that there is a library for this somewhere.
