Search Results

Search found 21263 results on 851 pages for 'website deployment'.


  • How do I package this script as an MSI for Group Policy?

    - by TheCleaner
    I had a developer who is no longer with us create an MSI to do this for me, but the package is outdated now and we need to deploy new files. Basically, I need to take the script at the bottom of this question and deploy it to all users as a software install package in Group Policy. I don't want to use a computer startup script, because I don't want this to run at every login; it should run just once to install and be done. How can I turn the script below into an MSI for deployment through GPO?

        @echo off
        del "C:\Windows\Downloaded Program Files\jdeexpimp.inf"
        del "C:\Windows\Downloaded Program Files\jdeexpimpU.ocx"
        del "C:\Windows\Downloaded Program Files\jdewebctls.inf"
        del "C:\Windows\Downloaded Program Files\jdewebctlsU.ocx"
        copy "\\tuldc01\EOneActiveXapplets\ActiveX898\jdeexpimpU\*" "C:\Windows\Downloaded Program Files\"
        copy "\\tuldc01\EOneActiveXapplets\ActiveX898\jdewebctlsU\*" "C:\Windows\Downloaded Program Files\"
        regsvr32 "C:\Windows\Downloaded Program Files\jdeexpimpU.ocx"
        regsvr32 "C:\Windows\Downloaded Program Files\jdewebctlsU.ocx"
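
    One way to build such a package is the open-source WiX toolset. Below is a minimal WiX v3 sketch, not a drop-in solution: it installs the four files into Downloaded Program Files and lets Windows Installer self-register the OCXs (which replaces the regsvr32 calls). The GUIDs, version number, and build-time source paths are placeholders to fill in.

        <?xml version="1.0"?>
        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
          <Product Id="*" Name="JDE ActiveX Controls" Language="1033"
                   Version="8.9.8.0" Manufacturer="YourCompany"
                   UpgradeCode="PUT-UPGRADE-GUID-HERE">
            <Package InstallerVersion="200" Compressed="yes" />
            <Media Id="1" Cabinet="product.cab" EmbedCab="yes" />
            <Directory Id="TARGETDIR" Name="SourceDir">
              <Directory Id="WindowsFolder">
                <Directory Id="DPFDIR" Name="Downloaded Program Files">
                  <!-- SelfRegCost="1" asks Windows Installer to call the
                       control's DllRegisterServer, i.e. what regsvr32 does -->
                  <Component Id="JdeExpImp" Guid="PUT-GUID-1-HERE">
                    <File Source="build\jdeexpimpU.ocx" SelfRegCost="1" />
                    <File Source="build\jdeexpimp.inf" />
                  </Component>
                  <Component Id="JdeWebCtls" Guid="PUT-GUID-2-HERE">
                    <File Source="build\jdewebctlsU.ocx" SelfRegCost="1" />
                    <File Source="build\jdewebctls.inf" />
                  </Component>
                </Directory>
              </Directory>
            </Directory>
            <Feature Id="Main" Level="1">
              <ComponentRef Id="JdeExpImp" />
              <ComponentRef Id="JdeWebCtls" />
            </Feature>
          </Product>
        </Wix>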

    Read the article

  • Apache and fastcgi - How to secure an Apache server with fastcgi enabled?

    - by skyeagle
    I am running a headless server on Ubuntu 10.x with Apache 2.2, and I am writing a FastCGI application for deployment on it. I remember reading a while back (I could be wrong) that running CGI (and by implication FastCGI) on a server can provide 'backdoors' for potential attackers, or at the very least could compromise the server if certain security measures are not taken. My questions are: what are the security gotchas I have to be aware of if I enable mod_fastcgi on my Apache server? And since I want to run the FastCGI application as a specific user (with restricted access), how do I do that?
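
    On the "run as a specific user" part, one common approach is Apache's suexec together with mod_fcgid; a sketch is below. It assumes mod_fcgid and mod_suexec are enabled, the user, group, and paths are placeholders, and suexec imposes its own ownership and document-root checks on top of this.

        <VirtualHost *:80>
            ServerName app.example.com
            DocumentRoot /var/www/app
            # FastCGI processes spawned for this vhost run as fcgiuser:fcgigroup
            SuexecUserGroup fcgiuser fcgigroup
            <Directory /var/www/app>
                Options +ExecCGI
                AddHandler fcgid-script .fcgi
                # Apache 2.2 access-control syntax
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>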

    Read the article

  • How to minify JS in PHP easily...Or something else

    - by RickyAYoder
    I've done some looking around, but I'm still a bit confused. I tried Crockford's JSMin, but Win XP can't unzip the executable file for some reason. What I really want is a simple, easy-to-use JS minifier that uses PHP to minify JS code and return the result. The reason why: I have 2 files (for example) that I'm working between: scripts.js and scripts_template.js. scripts_template.js is normal code that I write out; then I have to minify it and paste the minified script into scripts.js, the one that I actually USE on my website. I want to cut out the middle man by simply doing something like this on my page:

        <script type="text/javascript" src="scripts.php"></script>

    And then for the contents of scripts.php:

        <?php
        include("include.inc");
        header("Content-type: text/javascript");
        echo(minify_js(file_get_contents("scripts_template.js")));

    This way, whenever I update my JS, I don't have to constantly go to a website to minify it and re-paste it into scripts.js; everything is updated automatically. Yes, I have also tried Crockford's PHP Minifier, and I've taken a look at PHP Speedy, but I don't understand PHP classes just yet. Is there anything out there that a monkey could understand, maybe something with RegExp? How about we make this even simpler? I just want to remove tab spaces, and I still want my code to be readable. It's not like the script makes my site lag epically; it's just that anything is better than nothing. Tab removal, anyone? And if possible, how about removing completely BLANK lines?
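
    A minimal sketch of exactly that tab-and-blank-line stripper (it leaves the code readable and makes no attempt at real minification; note it would mangle the rare multi-line string literal continued with a backslash):

        <?php
        function minify_js($js)
        {
            $out = array();
            foreach (preg_split('/\r\n|\r|\n/', $js) as $line) {
                $trimmed = trim($line);     // strip tabs/spaces at both ends
                if ($trimmed === '') {
                    continue;               // drop completely blank lines
                }
                $out[] = $trimmed;
            }
            return implode("\n", $out);     // newlines themselves are preserved
        }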

    Read the article

  • ASP.NET application - Error when trying to connect to a SQL Server 2008 instance

    - by Pablo Dami
    Hi everyone! Despite being a regular reader of this great forum, this is my first post on it. I believe this community can help me with the following problem. I'm trying to publish an ASP.NET website on IIS 6.0 (Windows Server 2003), and I have some trouble trying to connect to the database. Curiously, I have installed another ASP.NET website on the same IIS 6.0 with the same properties and security parameters, and it can connect without problems to the same database. The application that works fine is almost the same as the one that can't connect with SQL Server (actually it is the same, but with several modifications). I'll enumerate some information related to the problem: OS: Windows Server 2003. SQL Server engine: SQL Server 2008. Does SQL Server accept remote connections? Yes. ASP.NET version: 2.0.50727. Are TCP/IP connections enabled on the SQL Server instance? Yes. Does the user in the connection string actually exist in the database, with the "owner" role? Yes. ORM tool used: NHibernate. I get the following error when I run the application in the browser: "An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Shared Memory Provider, error: 40 - Could not open a connection to SQL Server)". In order to isolate the problem, I ran some tests. For example, using the web app that works fine, I can connect without any problem to the database used by the web app that can't. From this evidence I concluded that the problem is within the web app and not in the SQL Server instance. I also googled my problem, but sadly I couldn't find anything useful to solve it. If someone can help me I'll appreciate that. Thank you so much for your time!
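
    One thing worth checking: "Shared Memory Provider" in that error means the client ended up attempting a local connection, which usually points at a connection string that is missing, empty, or lacking a server name in the failing app's config. A hedged sketch of an explicit TCP connection string (server, port, database, and credentials are all placeholders):

        <connectionStrings>
          <add name="Default"
               connectionString="Data Source=tcp:DBSERVER,1433;Initial Catalog=MyDatabase;User ID=appuser;Password=secret"
               providerName="System.Data.SqlClient" />
        </connectionStrings>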

    Read the article

  • Wireless technology - which is better: more single-radio APs or fewer dual-radio APs?

    - by gert_78
    We are currently talking to vendors of wireless solutions for a wireless deployment on a university campus with some 5,000 students. One vendor is offering us a Cisco solution with a WLC 5508 controller and 69 2x2 MIMO dual-band/dual-radio APs (Aironet AP 1042 model). The other vendor is offering us an Aruba solution with a 3600 controller and 96 2x2 MIMO dual-band BUT single-radio APs (Aruba AP93). Both vendors are charging US$82,000 (support, 3-year service contracts, switches, and additional required options all included, of course). The Aruba vendor is trying to convince me that 96 single-radio APs will give us more connections/users/capacity than the 69 dual-radio APs. I have my doubts about that, and since it is not my core competence domain, I wanted to ask here for the opinion of people who have more profound knowledge and experience in this area. When you talk to vendors it's often hard to get objective information. So please answer only if you are sure, and please mention it if you are affiliated with one of the vendors. I appreciate all useful help and want to thank you in advance for the effort!

    Read the article

  • AWS ELB as backend for Varnish Accelerator

    - by addisonj
    I am working on a large deployment on AWS that has high uptime requirements and variable load throughout the day. Obviously, this is the perfect use case for ELB (Elastic Load Balancer) and autoscaling. However, we also rely on Varnish for caching of API calls. My initial instinct was to structure the stack so that Varnish uses ELB as a backend, which in turn hits an app group: Varnish -> ELB -> AppServers. However, according to a few sources that isn't possible, as ELB constantly changes the IP address behind its DNS hostname, and Varnish resolves the backend IP when the VCL is loaded, so changes to the IP won't be picked up. Reading around, however, it looks like people are doing this, so I am wondering what workarounds exist. Perhaps a script to reload the VCL periodically? And if this is really just not a good idea, any ideas for other solutions?
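
    A sketch of the periodic-reload idea is below; the hostname, paths, and management port are placeholders, and gethostbyname only sees one of the ELB's addresses, so this is a blunt but workable instrument. Reloading the VCL forces Varnish to re-resolve the backend host at compile time:

        #!/usr/bin/env python
        import socket, subprocess, time

        ELB_HOST = "my-elb-123456.us-east-1.elb.amazonaws.com"
        VCL_FILE = "/etc/varnish/default.vcl"

        last_ip = None
        while True:
            ip = socket.gethostbyname(ELB_HOST)
            if ip != last_ip:
                # compile and activate a fresh copy of the same VCL
                name = "reload_%d" % int(time.time())
                subprocess.check_call(["varnishadm", "-T", "127.0.0.1:6082",
                                       "-S", "/etc/varnish/secret",
                                       "vcl.load", name, VCL_FILE])
                subprocess.check_call(["varnishadm", "-T", "127.0.0.1:6082",
                                       "-S", "/etc/varnish/secret",
                                       "vcl.use", name])
                last_ip = ip
            time.sleep(60)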

    Read the article

  • Should I upgrade Exchange 2003 or just upgrade the hardware?

    - by JohnyD
    My organization currently has a 4-year-old Exchange 2003 email server (32-bit, Intel Pentium D @ 3GHz, 3GB RAM). It's run very well over the past 4 years, but it is time to upgrade its hardware. This server handles email for approximately 30 clients, including a few OWA users with iPhones. My (somewhat ambiguous) question is: when I receive the new hardware, should I build out a new Exchange 2003 deployment or should I look at Exchange 2007/2010? I've heard that Exchange 2010 requires SharePoint 2010 (which I am currently not running). Are there benefits that a small-to-medium sized business can't do without? Am I making a horrible mistake staying with antiquated software? Other details: Exchange 2003 (v6.5 + SP2), single front-end server. All opinions and thoughts are very much appreciated.

    Read the article

  • Is this the right way to organize my database tables?

    - by Moss
    So I'm making a website that allows users to build contact lists. So there are users, the users have lists, and the lists have contacts. It seems to me that I need 3 tables for this, but I just want to make sure. There would be a User table, of course; then a "list of lists" table that has the username and list name as its primary key, along with whatever other info we want to attach to the lists as a whole; and finally, for lack of a better word, the List table, which would again have the username/listname compound key, plus the contact ID and the notes and such that the user attaches to that contact on that specific list. I hope that is a clear explanation. For some reason I feel unsure about this arrangement. For one thing, if the website becomes popular, the List table could swell to billions of rows. And it also feels a little weird that everybody's list info is all jumbled up in the same table. I suppose I could create separate tables for each user, and even for each list, but that seems like a bad idea for other reasons. My explanation assumes I can use foreign keys on my tables, which at the moment isn't actually an option. If I can't get InnoDB tables enabled, I will probably use IDs for the lists instead of depending on a compound key. Maybe I should do this anyway?
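
    For reference, a sketch of the three-table layout with surrogate IDs instead of the compound username/listname key (MySQL/InnoDB assumed; all names are illustrative):

        CREATE TABLE users (
            user_id  INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            username VARCHAR(50) NOT NULL UNIQUE
        ) ENGINE=InnoDB;

        CREATE TABLE lists (
            list_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            user_id INT UNSIGNED NOT NULL,
            name    VARCHAR(100) NOT NULL,
            UNIQUE (user_id, name),   -- one list name per user
            FOREIGN KEY (user_id) REFERENCES users (user_id)
        ) ENGINE=InnoDB;

        -- one row per contact per list; this is the table that grows large, and
        -- the (list_id, contact_id) primary key keeps lookups by list cheap
        CREATE TABLE list_contacts (
            list_id    INT UNSIGNED NOT NULL,
            contact_id INT UNSIGNED NOT NULL,
            notes      TEXT,
            PRIMARY KEY (list_id, contact_id),
            FOREIGN KEY (list_id) REFERENCES lists (list_id)
        ) ENGINE=InnoDB;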

    Read the article

  • A way to enable a LaunchDaemon to output sound?

    - by Varun Mehta
    I have a small Foundation application that checks a website and plays a sound if it sees a certain value. This application successfully plays a sound when I run it as my user from the Terminal. I've configured this app to run as a LaunchDaemon, with the following plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>org.myorg.appidentifier</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Users/varunm/path/to/cli/application</string>
            </array>
            <key>KeepAlive</key>
            <true/>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>

    When I have this service launched I can see it successfully read in and log values from the website, but it never generates any sound. The sound files are located in the same directory as the binary, and I use the following code:

        NSSound *soundToPlay = [[NSSound alloc] initWithContentsOfFile:@"sound.wav" byReference:NO];
        [soundToPlay setDelegate:stopper];
        [soundToPlay play];
        while (g_keepRunning) {
            [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];
        }
        [soundToPlay setCurrentTime:0.0];

    Is there any way to get my LaunchDaemon application to play sound? This machine gets used by different people, and sometimes has no one logged in, which is why I have to configure it as a LaunchDaemon.

    Read the article

  • Need near-100%-uptime, third-party, web-accessible hosting for static web resources

    - by Jared Henderson
    I hope this makes sense: my business sells a website template, and we currently have about 10,000 users. For various reasons that are unimportant to this question, I try to keep the file size of the zipped template we give them as small as possible. Because of this, I have taken a bunch of images and a couple of static files used by the template and moved them to external hosting. They are referenced by absolute URL in the CSS and markup, instead of shipping all of those images and files with every template. So, basically, 10,000+ (and growing) users are requesting images and files from a third-party host. I don't use my own web hosting for this because I am still on medium-cheap shared hosting for my website, and if it goes down, 10,000+ users are potentially affected. Currently I'm having the template directly access files inside an open-source Google Code project that I created for just this purpose. But that seems like a bastardization of what a Google Code repository is for; plus, Google Code (I've found out) often spews 502 Bad Gateway errors for hours at a time. So, anyway, my question is: where is the right kind of place to host these? Obviously I'm willing to pay. My main needs are speed and uptime, since the images and files are being requested from thousands of different websites every day. Is this something that I should use Amazon S3 for? I'm guessing there's some kind of service exactly for this kind of need, but I'm at a loss to figure out what it is.

    Read the article

  • Fabric and Cygwin don't work with Windows UNC paths

    - by tcoopman
    I have some strange problems with Fabric deployment to Windows Server 2008 R2. The thing I am trying to accomplish is to copy some files to a shared folder with a Fabric script (this script does a lot of other things too, but only this step gives me problems). This is the problem: when I try to access a UNC (Universal Naming Convention) path, I always get access-denied kinds of errors if I run the script through Fabric. When I run the same command in an SSH prompt (same user), it works fine. Examples:

        cmd:    robocopy f:/.... //share
        result: in SSH this works fine; in Fabric I get "Logon failure: the user has
                not been granted the requested logon type at this computer."

        cmd:    cd //share
        result: in SSH this works fine; in Fabric I get "//share: Not a directory"

    Further information: uname -a and whoami return exactly the same thing in Fabric and SSH. I also tried things like mount and net use, but those commands all have much the same problem.
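
    One workaround worth trying (a sketch; server names, credentials, and paths are placeholders): authenticate to the share explicitly inside the same remote command, since the token of a non-interactive network logon often lacks the credentials needed for the second hop to the share. Quoting may need adjusting for Cygwin's bash.

        from fabric.api import run, task

        @task
        def push_files():
            # 'net use' supplies credentials for the share, then robocopy runs
            # in the same session while that authenticated connection is alive
            run(r'net use \\fileserver\deploy SecretPass /user:DOMAIN\deployer '
                r'&& robocopy F:\build \\fileserver\deploy /E')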

    Read the article

  • Replacing 3 Cisco Catalyst 4500s

    - by hoberion
    Our network supplier recommends replacing our 3 Cisco Catalyst 4500s because they are EOL and don't speak OSPF (which we really want). It's not my area of expertise, so I can't say for sure whether we really need to replace these units, but for my company the estimated cost of 250K euro is a huge problem. Is there any way to cut down on costs (without moving away from Cisco devices)? I heard the 4500s can speak OSPF but would need an upgrade of sorts? Edit, our current hardware:

        Version:    IOS (tm) Catalyst 4000 L3 Switch Software (cat4000-I9K91S-M),
                    Version 12.2(20)EW, EARLY DEPLOYMENT RELEASE SOFTWARE (fc1)
        Supervisor: WS-X4013+ Cisco Catalyst 4500 Series Supervisor Engine II-Plus
        Modules:    WS-X4306-GB Cisco Catalyst 4500 Gigabit Ethernet Module, 6 Ports (GBIC)
                    WS-X4306-GB Cisco Catalyst 4500 Gigabit Ethernet Module, 6 Ports (GBIC)
                    WS-X4548-GB-RJ45 Cisco Catalyst 4500 Enhanced 48-Port 10/100/1000 Module (RJ-45)
                    WS-X4548-GB-RJ45 Cisco Catalyst 4500 Enhanced 48-Port 10/100/1000 Module (RJ-45)
                    WS-X4548-GB-RJ45 Cisco Catalyst 4500 Enhanced 48-Port 10/100/1000 Module (RJ-45)

    Read the article

  • CompTIA A+ exam

    - by SysPrep2010
    Hello everyone. I have been in the IT field for only two years. I have been dealing with servers, firewalls, routers, switches, backup servers, and desktops. For the desktops, I have been dealing with WDS (Windows Deployment Services), not a lot of hardware. My question is this: is it really important to have an A+ cert under your belt? I don't see the point anymore. When a desktop goes down, from what I have been seeing, we just buy a new one. I mean, rebuilding systems is fun, but I haven't done it in a while.

    Read the article

  • PHP Single Sign On (SSO) generating new session id

    - by bigstylee
    I am trying to create a single sign-on process. The method I have implemented makes use of storing session data in a database. When a new user comes to the website (www.example2.com), a table of authentication is checked. As this is their first visit to the website, there will be no match. The browser is redirected to the authentication server, www.example1.com/authenticate.php?session_id=ABC123, where ABC123 represents the session id created on www.example2.com. The session id which is then generated on www.example1.com is stored alongside the session id passed as the parameter in the URL. The user is then redirected back to www.example2.com, where a match of session ids should be found. This WAS working fine in Firefox, but when I tried it in Chrome I noticed that a new session id is generated when the browser is redirected back to www.example2.com. As a result, an infinite loop is created. This behaviour has not manifested itself in Firefox. What is causing the new session id to be generated? More importantly, what can I do to stop it? Thanks in advance! EDIT: I had a logic error that was causing an infinite loop. This now works fine again in Firefox, but the infinite loop still occurs in Chrome and Internet Explorer.
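
    One defensive workaround (a sketch; the parameter name is illustrative): carry www.example2.com's original session id back in the return URL, and adopt it explicitly before session_start(), instead of relying on the session cookie surviving the cross-site round trip:

        <?php
        // on www.example2.com, at the very top of the page the
        // authentication server redirects back to:
        if (isset($_GET['session_id'])
            && preg_match('/^[A-Za-z0-9,-]{1,64}$/', $_GET['session_id'])) {
            session_id($_GET['session_id']);  // must run BEFORE session_start()
        }
        session_start();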

    Read the article

  • Rename Exchange Server 2003 Domain

    - by Debasish Pramanik
    Hi All: We have the following Exchange Server deployment: Windows Server 2003 + domain controller + Exchange Server 2003. The domain name was X.COM. Everything was working fine, but for certain reasons we needed to rename the domain to Y.COM. The domain rename went well, but the rename for Exchange Server 2003 is having issues. When we run XDR-Fixup we get the following error: "Operation failed: Could not get 'configurationNamingContext' on RootDSE of this server." Let me know if you have any ideas on this.

    Read the article

  • Is CakePHP 'standards compliant' when generating HTML, forms, etc.?

    - by dtj
    So I've been reading a lot of "Designing with Web Standards" and really enjoying it. I'm a big CakePHP user, and as I look at the source for the various form elements that Cake creates with its FormHelper, I see all sorts of extraneous markup. In the book, the author promotes semantic HTML and writing your markup as simply and generically as possible. So my question is: am I better off writing my own HTML in these situations? I really want to work in compliance with the XHTML and CSS standards, and it seems I'd spend just as much time (if not more) cleaning up Cake's HTML as I would writing my own. Thoughts? P.S. Here's an example of an out-of-the-box form that CakePHP generates using the FormHelper:

        <form id="CompanyAddForm" method="post" action="/omni_cake/companies/add" accept-charset="utf-8">
            <div style="display:none;"><input type="hidden" name="_method" value="POST" /></div>
            <div class="input text required">
                <label for="CompanyName">Name</label>
                <input name="data[Company][name]" type="text" maxlength="50" id="CompanyName" />
            </div>
            <div class="input text required">
                <label for="CompanyWebsite">Website</label>
                <input name="data[Company][website]" type="text" maxlength="50" id="CompanyWebsite" />
            </div>
            <div class="input textarea">
                <label for="CompanyNotes">Notes</label>
                <textarea name="data[Company][notes]" cols="30" rows="6" id="CompanyNotes"></textarea>
            </div>
            <div class="submit"><input type="submit" value="Submit" /></div>
        </form>
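
    For comparison, a hand-rolled equivalent in the leaner, semantic style the book advocates; a sketch that keeps Cake's field-naming convention and hidden _method field so the form still posts to the same action:

        <form id="CompanyAddForm" method="post" action="/omni_cake/companies/add">
            <input type="hidden" name="_method" value="POST" />
            <fieldset>
                <label for="CompanyName">Name</label>
                <input type="text" name="data[Company][name]" id="CompanyName" maxlength="50" />

                <label for="CompanyWebsite">Website</label>
                <input type="text" name="data[Company][website]" id="CompanyWebsite" maxlength="50" />

                <label for="CompanyNotes">Notes</label>
                <textarea name="data[Company][notes]" id="CompanyNotes" cols="30" rows="6"></textarea>
            </fieldset>
            <input type="submit" value="Submit" />
        </form>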

    Read the article

  • Installing Domain Controller on Hyper-V Host

    - by MichaelGG
    Given a resource limited setup consisting of 2 host machines (HyperV-01 and HyperV-02), is it OK to put the domain controllers in parent partition, instead of their own VM? The main reason is that if the DCs go into a child partition, starting from cold on both machines could lead to a bit of an issue, as there'd be no DCs around until well after both parents have booted. I'm guessing this might cause undesirable effects. Am I correct to be worried about joining the host systems to a domain that's only on VMs? The biggest drawback I've heard so far is that if AD gets heavily used, its resources could cut into HyperV's. I'm not concerned about that for this deployment. Any other suggestions? (Besides finding a 3rd machine and running AD on it.)

    Read the article

  • Problems compiling libjingle/gtk+-2.0 for Mac OS X

    - by mindthief
    Hi All, I'm trying to compile libjingle on Mac OS X Snow Leopard. The INSTALL file said to './configure', 'make' and 'make install', as usual, but make fails for me. Initially it gave some messages indicating that I didn't have pkg-config installed (I guess OS X doesn't come with it installed?), so I downloaded pkg-config from http://pkgconfig.freedesktop.org/releases/ Now I get this message:

        Package gtk+-2.0 was not found in the pkg-config search path.
        Perhaps you should add the directory containing `gtk+-2.0.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'gtk+-2.0' found

    I tried to install GTK by using the script at SourceForge: http://sourceforge.net/projects/gtk-osx/ (this is the script pointed to by the GTK website). Running the script didn't really seem to do anything; here is the output:

        $ ./gtk-osx-build-setup.sh
        Checking out jhbuild (2.27.3) from git...
        From git://git.gnome.org/jhbuild
         * tag 2.27.3 -> FETCH_HEAD
        Installing jhbuild...
        Installing jhbuild configuration...
        Installing gtk-osx moduleset files...
        Done.
        $

    And I still get that error message about "Package gtk+-2.0 not found" while make-ing libjingle. Help will be appreciated, thanks!
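
    If the jhbuild route stays stubborn, one alternative (assuming MacPorts is installed; package names as in the MacPorts tree) is to install GTK 2 and pkg-config from MacPorts and point pkg-config at its .pc files:

        $ sudo port install pkgconfig gtk2
        $ export PKG_CONFIG_PATH=/opt/local/lib/pkgconfig:$PKG_CONFIG_PATH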

    Read the article

  • How to recover from failed Mysql schema update, with replication?

    - by OmerGertel
    I have two MySQL servers configured with master-slave replication. Before we deploy a new application version we: 1) STOP SLAVE; 2) take a MySQL dump of the slave. However, if a mistake is made during the deployment of the new schema version (a table is dropped by mistake, for example), having the slave intact doesn't help. Our service is write-intensive, so we can't turn it back on until we have a working master. If we now load the MySQL dump back into the master, it will take a long time, during which our service remains down. What is the best practice for recovering from such a mistake? How can I set up the system so I can easily promote the slave, turn on our service, and only then tend to the broken database? Mainly, I'm worried about re-syncing the slave and the master after changes are made on the slave.
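
    A sketch of the promote-the-slave path (statements assume MySQL 5.x; host and user names are placeholders). The promoted slave serves writes immediately, and the broken master is rebuilt later from a dump of the promoted machine and re-attached as the new slave:

        -- on the intact slave, after STOP SLAVE:
        RESET SLAVE;    -- discard relay logs and master coordinates
        RESET MASTER;   -- start a clean binlog for the future slave to read
        -- repoint the application at this server and resume service

        -- later, on the rebuilt ex-master (restored from a fresh dump of the
        -- promoted machine; in practice take the log file/position from that
        -- dump, e.g. via --master-data):
        CHANGE MASTER TO
            MASTER_HOST='promoted-slave', MASTER_USER='repl',
            MASTER_PASSWORD='replpass',
            MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
        START SLAVE;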

    Read the article

  • How can I prevent/make it hard to download my flash video?

    - by Billy
    I want to at least prevent normal users from downloading my Flash video. What's the best way to do it? Create an HTTP handler, add a token (e.g. a time-based id), and set the cache control to no-cache, so that only users with the correct token can view the video? Is that feasible? The requirement from the client is that the video should not be downloadable by users and should be watched only on the particular website. I want to know if this works: http://www.somesite.com/video.swf?time=1248319067 (the server generates a token, time in the above example, so that a user can only make one request with this link; if the user wants to watch the video again, he needs to go to our website to get the token again). Is this enough to prevent novices from downloading? I can't download this Flash video with the DownloadHelper Firefox plugin: http://news.bbc.co.uk/2/hi/americas/8164177.stm Updated (13:49, 2009/07/23): the above file can be downloaded using some video download software. The video files of the following Chinese sites are well protected (I can't download them using many video download programs): http://programme.tvb.com/drama/abrideforaride/video/ Do you know how it is done?
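
    A sketch of that one-time-token idea in PHP (function and file names are illustrative; the page embeds video.php?token=..., and video.php validates and immediately invalidates the token before streaming the file):

        <?php
        session_start();

        // when rendering the page that embeds the player:
        function issue_token() {
            $token = md5(uniqid(mt_rand(), true));
            $_SESSION['video_token'] = $token;   // remember it server-side
            return $token;                       // embed in the player URL
        }

        // at the top of video.php, which streams the file:
        function require_token() {
            if (!isset($_GET['token'], $_SESSION['video_token'])
                || $_GET['token'] !== $_SESSION['video_token']) {
                header('HTTP/1.1 403 Forbidden');
                exit;
            }
            unset($_SESSION['video_token']);     // one-time use
        }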

    Read the article

  • cookies handling on webrequest and response

    - by manish patel
    I have created an application that has a function, Mainpost, which posts data to https sites. I want to handle cookies in this function. How can I do that?

        public string Mainpost(string website, string content)
        {
            // this is what we are sending
            string post_data = content;

            // this is where we will send it
            string uri = website;

            // create a request
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
            request.KeepAlive = false;
            request.ProtocolVersion = HttpVersion.Version10;
            request.Method = "POST";

            // turn our request string into a byte stream
            byte[] postBytes = Encoding.ASCII.GetBytes(post_data);

            // this is important - make sure you specify type this way
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = postBytes.Length;

            Stream requestStream = request.GetRequestStream();

            // now send it
            requestStream.Write(postBytes, 0, postBytes.Length);
            requestStream.Close();

            // grab the response and print the status code to the console
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            string str = (new StreamReader(response.GetResponseStream())).ReadToEnd();
            Console.WriteLine(response.StatusCode);
            return str;
        }
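
    HttpWebRequest only tracks cookies when it is given a CookieContainer. A sketch of the usual pattern (one container shared across calls, so cookies set by one response are replayed automatically on the next request; class and member names are illustrative):

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        public class CookieAwarePoster
        {
            // one container shared across calls keeps the session cookies
            private static readonly CookieContainer Cookies = new CookieContainer();

            public static string Post(string uri, string postData)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";
                request.CookieContainer = Cookies;   // attach BEFORE sending

                byte[] bytes = Encoding.ASCII.GetBytes(postData);
                request.ContentLength = bytes.Length;
                using (Stream s = request.GetRequestStream())
                {
                    s.Write(bytes, 0, bytes.Length);
                }

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    // Set-Cookie headers from this response now live in Cookies
                    return reader.ReadToEnd();
                }
            }
        }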

    Read the article

  • Libraries and pseudocode for physical Dashboard/Status board

    - by dani
    OK, so I bought a 46" screen for the office yesterday, and with the imminent risk of being accused of setting up an "elaborate World Cup procrastination scheme", I'd better show my colleagues what it's meant for ;) Looking at my simple sketch, and at these great projects from which I was inspired, I would like to get some input on the following:

    1. Pseudocode for the skeleton: as some methods should be called every 24 hours ("today's date in the heading") and others at 60-second intervals ("Twitter results"), what would be a good approach using JavaScript (jQuery) and PHP? EDIT: Alsciende: I can agree that #1 and #8 are too vague. Therefore I remove #8 and try to clarify #1: by "pseudocode for the skeleton" I basically mean: could this be done entirely using JavaScript timers, and how would you set up the various timers? (A sketch follows this list.)
    2. Library for Google Analytics: which libraries support the Google Analytics API and can produce neat charts? Preferably HTML5/JavaScript-based, like Protovis.
    3. Library for Twitter: which libraries would you recommend for fetching Twitter search results and the latest tweets from profiles?
    4. Libraries for typography/CSS/HTML5: I'm trying to learn some HTML5 etc. in the process; please advise on any other typography/CSS libraries that could be of relevance.
    5. Scraping/parsing? A concrete example: trying to fetch today's menu from this restaurant's website, how would you go about it? (It's in Swedish, but you get the point, sorry ;) )
    6. Real-time stats? I'm using the WassUp plugin for WordPress to track real-time visitors on our website. Other logging software (AWStats etc.) is probably also installed on the webserver. Any ideas on how to extract information from these and present it in real time on the dashboard?
    7. Browser choice? Which browser and OS would you pick? Stable, full-screen, HTML5.
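
    On the timer question, a minimal sketch (endpoint names and element ids are placeholders): plain setInterval calls are enough, one per refresh cadence, each pulling a server-rendered HTML fragment from a small PHP endpoint.

        // assumes jQuery is loaded
        $(function () {
            function refresh(url, target) {
                $(target).load(url);   // fetch a fragment and swap it in
            }
            refresh('heading.php', '#heading');   // run both once at startup
            refresh('tweets.php',  '#tweets');
            setInterval(function () { refresh('tweets.php',  '#tweets');  }, 60 * 1000);
            setInterval(function () { refresh('heading.php', '#heading'); }, 24 * 60 * 60 * 1000);
        });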

    Read the article

  • getting maps to accept a dynamically generated KML file?

    - by arc
    Hi. I have a button that launches the Google Maps app on my device via an intent. I want to be able to pass it a PHP page that generates a KML file. I have done this on a website before using the Google Maps API in JS, but it doesn't seem to work on Android. My PHP file is as follows:

        <?php
        echo '<kml xmlns="http://www.google.com/earth/kml/2">';
        echo '<Placemark>';
        echo '<name>Google Inc.</name>';
        echo '<description>1600 Amphitheatre Parkway, Mountain View, CA 94043</description>';
        echo '<Point>';
        echo '<coordinates>-122.0841430, 37.4219720, 0</coordinates>';
        echo '</Point>';
        echo '</Placemark>';
        echo '</kml>';
        ?>

    Launching with:

        final Intent myIntent = new Intent(android.content.Intent.ACTION_VIEW,
                Uri.parse("geo:0,0?q=http://website.com/kml_gen.php"));
        startActivity(myIntent);

    It launches Maps and finds the file, but won't display it 'because it contains errors'. Is this just not possible, or are there other ways to construct the intent that might work?
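
    One thing to try (a sketch; the namespace is kept from the question): send a KML content type and the XML declaration, since consumers frequently reject a file served as text/html as "containing errors", and write each coordinate tuple without spaces, as the KML spec requires.

        <?php
        header('Content-type: application/vnd.google-earth.kml+xml');
        echo '<?xml version="1.0" encoding="UTF-8"?>';
        echo '<kml xmlns="http://www.google.com/earth/kml/2">';
        echo '<Placemark>';
        echo '<name>Google Inc.</name>';
        echo '<description>1600 Amphitheatre Parkway, Mountain View, CA 94043</description>';
        echo '<Point>';
        // lon,lat,altitude with no spaces inside the tuple
        echo '<coordinates>-122.0841430,37.4219720,0</coordinates>';
        echo '</Point>';
        echo '</Placemark>';
        echo '</kml>';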

    Read the article

  • How do I track down sporadic ASP.NET performance problems in a production environment?

    - by Steve Wortham
    I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast. But occasionally it is just really, really slow; I mean like 5-10 seconds of load time kind of slow. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server at a completely different web hosting company. But the problems continue. I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. If I can then look back at the logs, I can see what exactly my website was doing at the time of extreme slowness. Is there a .NET logging system that'll allow me to make calls into it with code while simultaneously tracking performance? What would you recommend?
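
    A sketch of the "log my own events alongside machine state" idea (file path and counter choice are placeholders; log4net or NLog could replace the raw file append):

        using System;
        using System.Diagnostics;
        using System.IO;

        public static class SlowRequestLogger
        {
            // a long-lived counter gives meaningful CPU readings after the
            // first NextValue() call, which always returns 0
            private static readonly PerformanceCounter Cpu =
                new PerformanceCounter("Processor", "% Processor Time", "_Total");

            public static void Log(string message)
            {
                float cpuPct = Cpu.NextValue();
                long managedKb = GC.GetTotalMemory(false) / 1024;
                File.AppendAllText(@"C:\logs\perf.log", string.Format(
                    "{0:u}\t{1}\tcpu={2:F1}%\tmanagedMem={3}KB{4}",
                    DateTime.UtcNow, message, cpuPct, managedKb,
                    Environment.NewLine));
            }
        }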

    Read the article

  • Updating a local sqlite db that is used for local metadata & caching from an service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest one I found was: Syncing objects between two disparate systems best approach. Anyway, to begin: because there are no RSS feeds available, I'm screen-scraping a webpage. It does a fetch, then goes through the page to scrape out all of the information that I'm interested in, and dumps that information into a SQLite database, so that I can query the information at my leisure without repeatedly fetching from the website. However, I'm also storing various metadata on the data itself in the SQLite db, such as: have I looked at the data, is the data new/old, and a bookmark to a chunk of data (think of it as a collection of unrelated data, where the bookmark is just a pointer to where I am in processing/reading the said data). So right now my problem is trying to figure out how to update the local SQLite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea:

    1. Download the page itself.
    2. Create a temporary table for the parsed data to go into.
    3. Do a comparison between the official and the temporary table, and copy updates and/or new information to the official table.

    This process seems kind of complicated, because I would have to figure out how to determine if the data in the temporary table is new, updated, or unchanged. So I am wondering if there isn't a better approach, or if anyone has any suggestions on how to architect/structure such a system?
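
    A sketch of step 3 in SQL, assuming each scraped item has a stable unique key (here called guid, with a UNIQUE index on items.guid; column names are illustrative). Doing the change detection in SQL keeps the locally stored metadata columns untouched:

        -- refresh rows whose scraped content changed, and flag them
        UPDATE items
           SET title  = (SELECT t.title FROM temp_items t WHERE t.guid = items.guid),
               body   = (SELECT t.body  FROM temp_items t WHERE t.guid = items.guid),
               is_new = 1
         WHERE EXISTS (SELECT 1 FROM temp_items t
                        WHERE t.guid = items.guid
                          AND (t.title <> items.title OR t.body <> items.body));

        -- then add rows that weren't seen before; existing guids are skipped
        INSERT OR IGNORE INTO items (guid, title, body, is_new)
        SELECT guid, title, body, 1 FROM temp_items;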

    Read the article
