Search Results

Search found 10748 results on 430 pages for 'disk encryption'.

Page 402/430 | < Previous Page | 398 399 400 401 402 403 404 405 406 407 408 409  | Next Page >

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id := (SELECT id FROM django_content_type
                                       WHERE app_label='web' AND model='member');
            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;
            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id, object_id_int,
                                            title, description, content, url, meta_encoded)
            SELECT 'default', member_content_type_id, web_member.id, web_member.id, web_member.name, '',
                   web_user.email||' '||web_member.normalized_name||' '||web_country.name, '', '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active=TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) in watson_searchentry is a unique pair in the table, but at the moment the index is not present (there is no use for it). This script should be run at most once a day for full rebuilds of the search index.
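
    A sketch of the "insert or update" variant of the query above, assuming PostgreSQL 9.5 or later (for INSERT ... ON CONFLICT) and a unique index on (content_type_id, object_id_int), which the question notes is not currently present:

        -- assumes: PostgreSQL 9.5+, a unique index on (content_type_id, object_id_int),
        -- and that this replaces the DELETE + INSERT inside the function above
        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id, object_id_int,
                                        title, description, content, url, meta_encoded)
        SELECT 'default', member_content_type_id, web_member.id, web_member.id, web_member.name, '',
               web_user.email||' '||web_member.normalized_name||' '||web_country.name, '', '{}'
        FROM web_member
        INNER JOIN web_user ON (web_member.user_id = web_user.id)
        INNER JOIN web_country ON (web_member.country_id = web_country.id)
        WHERE web_user.is_active=TRUE
        ON CONFLICT (content_type_id, object_id_int)
        DO UPDATE SET title = EXCLUDED.title, content = EXCLUDED.content;

    Unlike the DELETE-then-INSERT version, this does not remove entries for members who have since become inactive; those would still need a separate DELETE.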

    Read the article

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, DLLs) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines and the solution failed to build on my machine. The build on my machine encountered two problems:

    During a simple build, creation of the biggest static library (about 522 MB in debug mode) would fail with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879".

    A full solution rebuild creates this library; however, when it comes to linking the library to the main .exe file, devenv.exe spawns link.exe, which consumes about 80 MB of physical memory and 250 MB of virtual and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On the PCs of my colleagues where a successful build could be performed, there is only one link.exe process, which uses all the memory required for linking (about 500 MB physical).

    There is plenty of hard drive space on my machine and the file system is NTFS. All three of our systems are similar - Core2Quad processors, 4 GB of RAM, Windows XP SP3. We are using Visual Studio installed from the same source. I tried using different RAM and a different CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86 and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except maybe "out of heap space", however I am not sure how I should fight this. The only idea that I have left is reinstalling the OS, however I am not sure that it would help and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks

    Read the article

  • How to create a desktop shortcut to a website with website logo as icon?

    - by Wbdvlpr
    Hi, I need a solution which allows users to create a desktop shortcut to a website, preferably with the website logo as the shortcut icon. There are ways users can create a shortcut, such as dragging and dropping the favicon or right-clicking and creating a new shortcut, etc. This works, but it also creates the shortcut with the default IE icon on Windows. I am trying to avoid this method for this icon and some other reasons. I thought of creating a website-title.URL file and telling users to download and save the file to their desktop; again this works but doesn't solve the icon problem, as the website logo (.ico) has to be on the local disk at a pre-specified location [IconFile=path]. I was wondering if it is possible to create some sort of installer or Windows application which users can download from the website. Once it is executed by the user, it creates this .URL file on the user's desktop with the IconFile path pre-specified, and copies the website-logo.ico to the path specified in the .URL file, etc. So my questions are:

    - Is it possible to create such a solution?
    - What programming languages can be used to achieve this?
    - Can this installer/app be made to work on non-Windows machines as well?
    - From a development point of view, how big a project is it, if I have to hire a programmer to do this?

    Please let me know if you need more information. Thanks
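
    For reference, a minimal sketch of what such a generated .URL (Internet Shortcut) file could contain; the URL and icon path here are only placeholders, and the installer would be expected to drop the .ico file at that path:

        [InternetShortcut]
        URL=http://www.example.com/
        ; IconFile must point at an .ico that already exists on the local disk
        IconFile=C:\ProgramData\ExampleSite\website-logo.ico
        IconIndex=0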

    Read the article

  • Running an application from a USB device...

    - by Workshop Alex
    I'm working on a proof-of-concept application, containing a WCF service with a console host and client, both on a single USB device. On the same device I will also have the client application which will connect to this service. The service uses the Entity Framework to connect to the database, which in this POC will just return a list of names. If it works, it will be used for a larger project. Creating the client and service was easy and this works well from USB. But getting the service to connect to the database isn't. I've found this site, suggesting that I should modify machine.config, but that stops the XCopy deployment. This project cannot change any settings on the PC, so this suggestion is bad. I cannot create a deployment setup either. The whole thing just needs to run from the USB disk. So, how do I get it to run? (The service just selects a list of names from the database, which it returns to the client. If this POC works, it will do far more complex things!)
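
    A sketch, not verified against this project, of the usual way to avoid touching machine.config: ADO.NET providers can normally be registered in the application's own app.config instead. The provider shown (System.Data.SQLite) is only an illustration; the invariant name and type would have to match whatever database the service actually uses.

        <!-- app.config of the service host; the SQLite provider here is only an example -->
        <configuration>
          <system.data>
            <DbProviderFactories>
              <add name="SQLite Data Provider"
                   invariant="System.Data.SQLite"
                   description=".NET Framework Data Provider for SQLite"
                   type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite" />
            </DbProviderFactories>
          </system.data>
        </configuration>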

    Read the article

  • Call 32-bit or 64-bit program from bootloader

    - by user1002358
    There seems to be quite a lot of identical information on the Internet about writing the following 3 bootloaders:

    - Infinite loop (jmp $)
    - Print a single character
    - Print "Hello World"

    This is fantastic, and I've gone through these 3 variations with very little trouble. I'd like to write some 32- or 64-bit code in C and compile it, and call that code from the bootloader... basically a bootloader that, for example, sets the computer up to run some simple numerical simulation. I'll start by listing primes, for example, and then maybe some input/output from the user to maybe compute a Fourier transform. I don't know. I haven't found any information on how to do this, but I can already foresee some problems before I even begin. First of all, compiling a C program compiles it into one of several different files, depending on the target. For Windows, it's a PE file. For Linux, it's an a.out/ELF file. These files are both quite different. In my instance, the target isn't Windows or Linux, it's just whatever I have written in the bootloader. Secondly, where would the actual code reside? The bootloader is exactly 512 bytes, but the program I write in C will certainly compile to something much larger. It will need to sit on my (virtual) hard disk, probably in some sort of file system (which I haven't even defined!), and I will need to load the information from this file into memory before I can even think about executing it. But from my understanding, all this is many, many orders of magnitude more complex than a 12-line "Hello World" bootloader. So my question is: how do I call a large 32- or 64-bit program (written in C/C++) from my 16-bit bootloader?

    Read the article

  • Caching vector addition over changing collections

    - by DRMacIver
    I have the following setup: I have a largish number of uuids (currently about 10k but expected to grow unboundedly - they're user IDs) and a function f : id -> sparse vector with 32-bit integer values (no need to worry about precision). The function is reasonably expensive (not outrageously so, but probably on the order of a few hundred ms for a given id). The dimension of the sparse vectors should be assumed to be infinite, as new dimensions can appear over time, but in practice it is unlikely to ever exceed about 20k (and individual results of f are unlikely to have more than a few hundred non-zero values). I want to support the following operations efficiently:

    - add a new ID to the collection
    - invalidate an existing ID
    - retrieve the sum of f(id) over the collection in O(changes since last retrieval)

    i.e. I want to cache the sum of the vectors in a way that's reasonable to update incrementally. One option would be to support a remove-ID operation and treat invalidation as a remove followed by an add. The problem with this is that it requires us to keep track of all the old values of f, which is expensive in space. I potentially need to use many instances of this sort of cached structure, so I would like to avoid that. The likely usage pattern is that new IDs are added at a fairly continuous rate and are frequently invalidated at first. IDs which have been invalidated recently are much more likely to be invalidated again than ones which have remained valid for a long time, but in principle an old ID can still be invalidated. Ideally I don't want to do this in memory (or at least I want a way that lets me save the result to disk efficiently), so an idea which lets me piggyback off an existing DB implementation of some sort would be especially appreciated.

    Read the article

  • Image does not update when file name remains the same in Xcode 3.1.2

    - by vman049
    Hi everyone, I'm using Xcode version 3.1.2 and am developing for iPhone using the Simulator with iOS 2.2.1 on Leopard. I had an image file named "img.jpg" in my project which I decided to switch for a different file. After adding the new file into the Xcode Resources folder, I removed the first file and renamed the new file to the same name as the previous one, "img.jpg". When I run my program, however, the Simulator loads the old image instead of the new one, even though the old one has been deleted from disk (not just the reference). I tried changing the name of the file to "img2.jpg", and it worked like it should - loading the new image - but I want to keep the name "img.jpg". I ran a search with Spotlight for "img.jpg" to see if there was another copy stored somewhere that Xcode was using, but it only returned my new image file. I have tried uninstalling the app from the Simulator and running the application again, but that also does not fix the problem. What must I do for Xcode to recognize that I want to use the new image file and not the old one? Thanks for your help!!

    Read the article

  • How do I prevent my code from being stolen?

    - by Calmarius
    What happens exactly when I launch a .NET exe? I know that C# is compiled to IL code and I think the generated exe file is just a launcher that starts the runtime and passes the IL code to it. But how? And how complex a process is it? The IL code is embedded in the exe. I think it can be executed from memory without writing it to disk, while ordinary exes cannot (ok, they can, but it is very complicated). My final aim is to extract the IL code and write my own encrypted launcher to prevent script kiddies from opening my code in Reflector and just stealing all my classes easily. Well, I can't prevent reverse engineering completely. If they are able to inspect the memory and catch the moment when I'm passing the pure IL to the runtime, then it won't matter whether it is a .NET exe or not, will it? I know there are several obfuscator tools, but I don't want to mess up the IL code itself. EDIT: so it seems it isn't worth trying what I wanted. They will crack it anyway... So I will look for an obfuscation tool. And yes, my friends said too that it is enough to rename all symbols to meaningless names. And reverse engineering won't be so easy after all.

    Read the article

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here. I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables: USERS, GROUPS and GROUP_USERS. The first two tables simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores the groups every user is part of. It only has two columns: USER_ID and GROUP_ID. Since every user can be a member of several groups, I decided to make a separate table for this purpose, rather than storing a comma-separated column in the USERS table. Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique, and if I give them the constraint UNIQUE, pgAdmin tells me that each table should have at least one primary key. But now I am stuck with what seems to be a lot of indexes and relations on a very small table containing only numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem since it's just numbers anyway, but it feels quite stupid to have a full-sized index referring to a smaller table. Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there any other solution I should be aware of? PS. I don't want to use an array column, even if they are supported by PostgreSQL. I want to be as generic as possible so I can switch databases later on, if necessary. EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance rather than the opposite, due to the size of the generated index? Thank you!
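
    A minimal sketch of the join table described above, with a composite primary key (table and column names follow the question; the extra index is an assumption, only needed if lookups by group alone are common):

        -- the composite PK doubles as the unique index on the (user_id, group_id) pair
        CREATE TABLE group_users (
            user_id  INTEGER NOT NULL REFERENCES users (id),
            group_id INTEGER NOT NULL REFERENCES groups (id),
            PRIMARY KEY (user_id, group_id)
        );

        -- optional: only if you also query "all users of a group" by group_id alone
        CREATE INDEX group_users_group_id_idx ON group_users (group_id);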

    Read the article

  • rake db:migrate fails when trying to do inserts

    - by anthony
    I'm trying to get a database populated so I can begin working on a project. This project is already built and I'm being brought in to help with front-end work. The problem is I can't get rake db:migrate to do any inserts. Every time I run rake db:migrate I get this:

        ...
        == 20081220084043 CreateTimeDimension: migrating ==============================
        -- create_table(:time_dimension) - 0.0870s
        INSERT time_dimension(time_key, year, month, day, day_of_week, weekend, quarter)
        VALUES(20080101, 2008, 1, 1, 'Tuesday', false, 1)
        rake aborted!
        Could not load driver (uninitialized constant Mysql::Driver)
        ...

    I'm building on a MBP with Snow Leopard. I've installed Xcode from the disk that comes with the Mac. I've updated Ruby, installed Rails and all the needed gems. I have the 64-bit version of MySQL installed. I've tried the 32-bit version of MySQL and I've even tried installing from MacPorts (via http://www.robbyonrails.com/articles/2010/02/08/installing-ruby-on-rails-passenger-postgresql-mysql-oh-my-zsh-on-snow-leopard-fourth-edition). The mysql gem is installed using:

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/path/to/mysql/bin/mysql_config

    The migration creates the tables just fine, but it dies every. single. time. it tries an insert. Any help would be great

    Read the article

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s and the file download rate at least 2000/s, with an average file size of 15 KB. The network bandwidth is 1 Gbps. My approach so far has been: creating 600 socket threads, each having 60 sockets and using WSAEventSelect to wait for data to read. As soon as a file download is complete, add that memory address (of the downloaded file) to a pipeline (a simple vector) and fire another request. When the total downloaded is more than 50 MB among all socket threads, write all the downloaded files to disk and free the memory. So far, this approach has not been very successful: the connection rate I can hit does not go beyond 2900 connections/s, and the downloaded data rate is even less. Can somebody suggest an alternative approach which could give me better stats? Also, I am working on a Windows Server 2008 machine with 8 GB of memory. And do we need to hack the kernel so that we can use more threads and memory? Currently I can create a maximum of 1500 threads, and memory usage does not go beyond 2 GB [which technically should be much more, as this is a 64-bit machine]. And IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks guys!

    Read the article

  • Tracking down origin of I/O in multi-process server

    - by Craig Ringer
    I'm currently trying to track down some phantom I/O in a PostgreSQL build I'm testing. It's a multi-process server and it isn't simple to associate disk I/O back to a particular back-end and query. I thought Linux's perf tool would be ideal for this, but I'm struggling to capture block I/O performance counter metrics and associate them with user-space activity. It's easy to record block I/O requests and completions with, e.g.:

        sudo perf record -g -T -u postgres -e 'block:block_rq_*'

    and the user-space pid is recorded, but there's no kernel or user-space stack captured, or ability to snapshot bits of the user-space process's heap (say, query text) etc. So while you have the pid, you don't know what the process was doing at that point. Just perf script output like:

        postgres 7462 [002] 301125.113632: block:block_rq_issue: 8,0 W 0 () 208078848 + 1024 [postgres]

    If I add the -g flag to perf record it'll take snapshots of the kernel stack, but it doesn't capture user-space state for perf events captured in the kernel. The user-space stack only goes up to the entry point from userspace, like LWLockRelease, LWLockAcquire, memcpy (mmap'd IO), __GI___libc_write, etc. So. Any tips? Being able to capture a snapshot of the user-space stack in response to kernel events would be ideal.
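
    A possible sketch (not tested against this setup): perf can be asked to unwind the user-space stack with DWARF debug info instead of frame pointers, which is often what makes the user-side call chain visible for tracepoint events; the trade-off is much larger perf.data files.

        # assumes a perf build new enough to support --call-graph dwarf
        sudo perf record -e 'block:block_rq_issue' --call-graph dwarf -u postgres
        sudo perf script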

    Read the article

  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 d

    - by Kirzilla
    Hello, let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K and all information about videos is stored in MySQL. The number of daily video views is about 1.5KK. As instruments we have the hard disk drive (text files), MySQL and Redis. Views:

    - top viewed
    - top viewed last 24 hours
    - top viewed last 7 days
    - top viewed last 30 days
    - top rated last 365 days

    How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate totals (for the last 24 hours) and update statistics in tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, last 30 days and last 365 days. This method seems very poor to me because we have to store information about the last 365 days for each video to make correct calculations. Are there any other good methods? Probably we have to choose other instruments for this? Thank you.
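
    A sketch of the rollup-table idea described above, using hourly buckets in MySQL (table and column names are made up for illustration; for the 30- and 365-day windows you would keep coarser daily buckets so a year of history stays small):

        -- one row per video per hour, updated by the hourly aggregation job
        CREATE TABLE video_views_hourly (
            video_id  INT      NOT NULL,
            view_hour DATETIME NOT NULL,   -- timestamp truncated to the hour
            views     INT      NOT NULL DEFAULT 0,
            PRIMARY KEY (video_id, view_hour)
        );

        -- "top viewed last 24 hours", recomputed periodically rather than per page view
        SELECT video_id, SUM(views) AS total_views
        FROM video_views_hourly
        WHERE view_hour >= NOW() - INTERVAL 24 HOUR
        GROUP BY video_id
        ORDER BY total_views DESC
        LIMIT 100;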

    Read the article

  • Ant JUnit tests are running much slower via Ant than via IDE - what to look at?

    - by Alex B
    I am running my JUnit tests via Ant and they are running substantially slower than via the IDE. My Ant call is:

        <junit fork="yes" forkmode="once" printsummary="off">
            <classpath refid="test.classpath"/>
            <formatter type="brief" usefile="false"/>
            <batchtest todir="${test.results.dir}/xml">
                <formatter type="xml"/>
                <fileset dir="src" includes="**/*Test.java" />
            </batchtest>
        </junit>

    The same test that runs near-instantaneously in my IDE (0.067s) takes 4.632s when run through Ant. In the past, I've been able to speed up test problems like this by using the junit fork parameter, but this doesn't seem to be helping in this case. What properties or parameters can I look at to speed up these tests? More info: I am using the reported time from the IDE vs. the time that the junit task outputs. This is not the sum total time reported at the end of the Ant run. So, bizarrely, this problem has resolved itself. What could have caused this problem? The system runs on a local disk, so that is not the problem.

    Read the article

  • What is common between environments within a shell terminal session?

    - by Matt1776
    I have a custom shell script that runs each time a user logs in or an identity is assumed; it's been placed in /etc/profile.d and performs some basic env variable operations. Recently I added some code so that if screen is running it will reattach it without needing me to type anything. There are some problems, however. If I log in as root and su - to another user, the code runs a second time. Is there a variable I can set when the code runs the first time that will prevent a second run of the code? I thought about writing something to disk, but then I don't want to prevent the code from running if I begin a new terminal session. Here is the code in question. It first attempts to reattach - if unsuccessful because it's already attached (as it might be on an interrupted session) it will 'take' the session back.

        screen -r
        if [ -z "$STY" ]; then
            exec screen -dR
        fi

    Ultimately this bug prevents me from substituting user to another user, because as soon as I do so, it grabs the screen session and puts me right back where I started. Pretty frustrating

    Read the article

  • [PHP/MySQL] How to create a text diff web app

    - by Adam Kiss
    Hello,

    Idea: I would like to create a little app for myself to store ideas (the thing is - I want to do it MY WAY).

    Database: I'm thinking of going simple:

    - id - unique id of the revision in the database
    - text_id - identification number of the text
    - rev_id - number of the revision
    - flags - various purposes - explained later
    - title - self-explanatory
    - desc - description
    - text - self-explanatory

    Flags: if I (for example) add the flag rb;65, then instead of storing the whole text I just say that whenever I ask for the latest revision, I go back to the DB and fetch revision 65. Question: Is this setup the best? Is it better to store the diff, or the whole text (I know, space is cheap...)? Does that revision flag make sense (wouldn't it be better to just copy the text - more disk space, but less DB and PHP processing)?

    PHP: I'm thinking that I'll go with PEAR here. Although the main point is open-edit-save, the possibility to view revisions can't be that hard to program and can be a life-saver in certain situations (good ideas got deleted, saving the wrong version, etc...). However, I've never used PEAR in a long-term or full-project relationship, and brief encounters in my previous experience left a rather bad feeling - as I remember, it was too difficult to implement, slow and humongous to play with, so I don't know if there's anything better.

    Why? Although there are bazillions of various time/project/idea management tools, everything lacks something for me, whether it's sharing with users, syncing on more PCs, time-tracking, project management... And I believe that this text diff webapp will be used internally with various different tools later. So if you know any good project management app with a nice UI and support for text-heavy usage, just let me know, so I'll save my time for something better than redesigning the wheel.

    Read the article

  • [PHP] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello,

    Situation: We are a little company with 3 people; each has a localhost webserver, and most projects (previous and current) are on one network-shared disk on a PC. We have a virtual server, where some of our clients' sites and our own site are hosted. Our standard workflow is: Coder PC -> programmer localhost -> dev domain (client.company.com) -> live version (client.com). It often happens that there are two or three guys working on the same project at the same time - one is on the dev version, two are on localhost. When finished, we try to synchronize the files on the dev version and ideally not mess up any files, which *knock knock* doesn't happen often. And then one of us deploys the dev version on the live webserver.

    Question: We are looking for a way to simplify this workflow while updating websites - ideally some sort of diff uploader or, probably, a VCS (Git/SVN/...), but we are not completely sure where to begin or which way would be ideal. Therefore I ask you, fellow stackoverflowers, for your experience with website/application deployment and your recommended workflow. We will probably also need to use a Mac in the process, so if that won't be a problem, that would be even better. Thank you

    Read the article

  • Iteration through the HtmlDocument.All collection stops at the referenced stylesheet?

    - by Jonas
    Since "bug in .NET" is often not the real cause of a problem, I wonder if I'm missing something here. What I'm doing feels pretty simple. I'm iterating through the elements in a HtmlDocument called doc like this: System.Diagnostics.Debug.WriteLine("*** " + doc.Url + " ***"); foreach (HtmlElement field in doc.All) System.Diagnostics.Debug.WriteLine(string.Format("Tag = {0}, ID = {1} ", field.TagName, field.Id)); I then discovered the debug window output was this: Tag = !, ID = Tag = HTML, ID = Tag = HEAD, ID = Tag = TITLE, ID = Tag = LINK, ID = ... when the actual HTML document looks like this: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <html> <head> <title>Protocol</title> <link rel="Stylesheet" type="text/css" media="all" href="ProtocolStyle.css"> </head> <body onselectstart="return false"> <table> <!-- Misc. table elements and cell values --> </table> </body> </html> Commenting out the LINK tag solves the issue for me, and the document is completely parsed. The ProtocolStyle.css file exist on disk and is loaded properly, if that would matter. Is this a bug in .NET 3.5 SP1, or what? For being such a web-oriented framework, I find it hard to believe there would be such a major bug in it.

    Read the article

  • (closed) nodejs responding with an image (via pipe and response.end()) leads to strange behaviour

    - by johannesboyne
    What the problem was: I had 2 different write streams! If write stream #1 is closed, the second should be closed too and then it all should be piped... BUT node is asynchronous, so while one has been closed, the other one hasn't. Even though stream.end() was called... well, you should always wait for the close event! Thanks guys for your help!

    I am at my wit's end. I used this code to pipe an image to my clients:

        req.pipe(fs.createReadStream(__dirname+'/imgen/cached_images/' + link).pipe(res))

    It does work, but sometimes the image is not transferred completely. Yet no error is thrown, either on the client side (browser) or on the server side (node.js). My second try was:

        var img = fs.readFileSync(__dirname+'/imgen/cached_images/' + link);
        res.writeHead(200, { 'Content-Type' : 'image/png' });
        res.end(img, 'binary');

    but it leads to the same strange behaviour... Does anyone have a clue for me? Regards! (abstracted code...)

        var http = require('http');
        http.createServer(function (req, res) {
            Imgen.generateNew(
                'virtualtwins/www_leonardocampus_de/overview/28',
                'www.leonardocampus.de',
                'overview',
                '28',
                null,
                [],
                [],
                function (link) {
                    fs.stat(__dirname+'/imgen/cached_images/' + link, function (err, file_info) {
                        if (err) { console.log('err', err); }
                        console.log('file info', file_info.size);
                        res.writeHead(200, 'image/png');
                        fs.createReadStream(__dirname+'/imgen/cached_images/' + link).pipe(res);
                    });
                }
            );
        }).listen(13337, '127.0.0.1');

    Imgen.generateNew just creates a new file, saves it to disk and gives back the path (link).

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
          AND DATE(timestamp) <  '20110101'
        GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried to index on the date aggregate:

        CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime roughly in half:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
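
    A sketch of an alternative worth trying (assuming the column is a plain timestamp): index the raw column and filter on a plain range, so the comparison doesn't go through DATE() on every row; whether the planner actually picks the index for a range covering most of the table is another matter.

        -- assumes "timestamp" is a timestamp column; index the raw value, not DATE(timestamp)
        CREATE INDEX actions_timestamp_idx ON actions ("timestamp");

        SELECT date_trunc('day', "timestamp") AS day, COUNT(*)
        FROM actions
        WHERE "timestamp" >= '2010-01-01'
          AND "timestamp" <  '2011-01-01'
        GROUP BY day
        ORDER BY day;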

    Read the article

  • Where to create/keep secret files for license information/trials on Windows/Mac OS X/Linux?

    - by BastiBense
    I'm writing a commercial product which uses a simple registration mechanism and allows the user to use the application for a demo period before purchasing. My application must store the registration information (if entered) and/or the date of the first launch somewhere, to calculate whether the user is still within the demo/trial period. While I'm pretty much finished with the registration mechanism itself, I now have to find a good way to store the registration information on the user's disk. The most obvious idea would be to store the trial period in the preferences file, but since users tend to delete/tinker with those from time to time, it might be a good idea to keep the registration information in a separate, more hidden file. So here's my question: what is the best place/strategy to keep and create such hidden files on Windows, Mac OS X and Linux? Here is what came to my mind so far:

    Linux/Mac OS X: Most Unix-like systems are rather locked down when it comes to places a user can write files to. In most cases this is only the /tmp directory and the user's home directory. I guess the easiest here is probably to create a file with a dot prefix to make it less visible, then give it a name that won't make it obvious that it's associated with my application.

    Windows: Probably much like Linux/Mac OS X - more recent Windows versions become more restrictive when it comes to file system permissions. Anyway, I'd like to hear your ideas and thoughts. Even better if you have already implemented something similar in the past. Thanks!

    Update: For me, the places for such files are more relevant than the discussion of whether this approach to copy protection is good or bad.

    Read the article

  • How can I simplify this user interface?

    - by Bears will eat you
    I'm writing an internal-tools webapp; one of the central pages in this tool has a whole bunch of related commands the user can execute by clicking one of a number of buttons on the page. Ideally, all of the buttons would fit on one line. Ordinarily I'd do this by changing each widget from a button with a (sometimes long) text label to a simple, compact icon - e.g. a "Save" button could be replaced by a familiar disk icon. Unfortunately, I don't think I can do this for every button on this particular page. Some of the command buttons just don't have good visual analogs - "VDS List". Or, if I needed to add another button in the future for some other kind of list, I'd need two icons that both communicate "list-ness" and which list. So, I'm still considering this option, but I don't love it. So it's come time for me to add yet another button to this section (don't you love internal tools?). There's not enough room on that single line to fit the new button. Aside from the icon solution I already mentioned, what would be a good* way to simplify/declutter/reduce or otherwise improve this UI?

    *As per Jakob Nielsen's article, I'd like to think that a dropdown menu is not the solution.

    Read the article

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used for customizing web sites, consumer communications, etc. Well, if you have:

    - a large number of consumers
    - large profiles with a huge set of variables (as profiles describing human behaviour are likely to be...)

    ...you are in trouble. If you really have a physical relational database to which you target a query, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation...) has died of boredom before getting any observable results. There is the possibility of keeping the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?

    Read the article

  • Batch Inserts And Prepared Query Error

    - by ircmaxell
    Ok, so I need to populate a MS Access database table with results from a MySQL query. That's not hard at all. I've got the program written to where it copies a template .mdb file to a temp name and opens it via ODBC. No problem so far. I've noticed that Access does not support batch inserting (VALUES (foo, bar), (second, query), (third, query)). So that means I need to execute one query per row (there are potentially hundreds of thousands of rows). Initial performance tests show a rate of around 900 inserts/sec into Access. With our largest data sets, that could mean execution times of minutes (which isn't the end of the world, but obviously the faster the better). So, I tried testing a prepared statement. But I keep getting an error:

        Warning: odbc_execute() [function.odbc-execute]: SQL error: [Microsoft][ODBC Microsoft Access Driver]
        COUNT field incorrect, SQL state 07001 in SQLExecute in D:\....php on line 30

    Here's the code I'm using (line 30 is odbc_execute):

        $sql = 'INSERT INTO table ([field0], [field1], [field2], [field3], [field4], [field5])
                VALUES (?, ?, ?, ?, ?, ?)';
        $stmt = odbc_prepare($conn, $sql);
        for ($i = 200001; $i < 300001; $i++) {
            $a = array($i, "Field1 $", "Field2 $i", "Field3 $i", "Field4 $i", $i);
            odbc_execute($stmt, $a);
        }

    So my question is twofold. First, is there any idea on why I'm getting that error (I've checked, and the number of elements in the array matches the field list, which matches the number of parameter ? markers)? And second, should I even bother with this or just use the straight INSERT statements? Like I said, time isn't critical, but if it's possible, I'd like to get that time as low as possible (then again, I may be limited by disk throughput, since 900 operations/sec is high already)... Thanks

    Read the article

  • What is the best way to properly test object equality against an array of objects?

    - by radesix
    My objective is to abort the NSXMLParser when I parse an item that already exists in the cache. The basic flow of the program works like this:

    1) The program starts and downloads an XML feed. Each item in the feed is represented by a custom object (FeedItem). Each FeedItem gets added to an array.

    2) When the parsing is complete, the contents of the array (all FeedItem objects) are archived to disk. The next time the program is executed or the feed is refreshed by the user, I begin parsing again; however, since a cache (array) now exists, as each item is parsed I want to see if the object already exists in the cache. If it does, then I know I have downloaded all the new items and no longer need to continue parsing.

    What I am learning, I think, is that I can't use indexOfObject or indexOfObjectIdenticalTo: because these really seem to be checking that the objects are using the same memory address (thus identical). What I want to do is see if the contents of the objects are equal (or at least some of the contents). I've done some research and found that I can override the isEqual: method; however, I really don't want to iterate/enumerate through the entire cache contents table for every newly parsed XML FeedItem. Is iterating through the collection and testing each one for equality the only way to do this, or is there a better technique I am not aware of? Currently I am using the following code, though I know it needs to change:

        NSUInteger index = [self.feedListCache.feedList indexOfObject:self.currentFeedItem];
        if (index == NSNotFound) {
        }

    Read the article
