Search Results

Search found 2240 results on 90 pages for 'tracking'.

Page 18 of 90

  • Tracking unique versions of files with hashes

    - by rwmnau
    I'm going to be tracking different versions of potentially millions of different files, and my intent is to hash them to determine whether I've already seen that particular version of the file. Currently, I'm only using MD5 (the product is still in development, so it's never dealt with millions of files yet), which is clearly not long enough to avoid collisions. However, here's my question: am I more likely to avoid collisions if I hash the file using two different methods and store both hashes (say, SHA1 and MD5), or if I pick a single, longer hash (like SHA256) and rely on that alone? I know option 1 has 288 hash bits and option 2 has only 256, but assume my two choices are the same total hash length. Since I'm dealing with potentially millions of files (and multiple versions of those files over time), I'd like to do what I can to avoid collisions. However, CPU time isn't (completely) free, so I'm interested in how the community feels about the tradeoff - is adding more bits to my hash proportionally more expensive to compute, and are there any advantages to multiple different hashes as opposed to a single, longer hash, given an equal number of bits in both solutions?
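    A minimal sketch of fingerprinting a file version with a single longer digest (Node.js assumed purely for illustration - the question doesn't name a language - and the file path is hypothetical):

        // Hypothetical sketch: fingerprint one file version with SHA-256.
        // A single 256-bit digest is simpler to store and compare than two
        // concatenated digests, and accidental collisions between distinct
        // files are vanishingly unlikely even at millions of files.
        const crypto = require('crypto');
        const fs = require('fs');

        function hashFileVersion(path, callback) {
          const hash = crypto.createHash('sha256');
          const stream = fs.createReadStream(path);   // stream so large files aren't read into memory
          stream.on('data', (chunk) => hash.update(chunk));
          stream.on('end', () => callback(null, hash.digest('hex')));
          stream.on('error', callback);
        }

        // Usage: look the hex digest up in your "seen versions" store.
        hashFileVersion('/path/to/somefile.bin', (err, digest) => {
          if (!err) console.log(digest);
        });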

    Read the article

  • Tracking object entries when "playing" a Windows Enhanced Metafile

    - by lzcd
    One of my current projects requires that I work out what colours are being used in an EMF file. I have been able to successfully whip up a file parser in C# that notes all references to colours... but haven't had any luck tracking which objects are in use across the entire file, so I can't tell apart colours that are merely referenced from colours that are actually used to paint on screen. The older-style WMF files are easy, as the object library starts at zero and one can simply track each "Create Object"-style command... but EMF files are proving to be trickier, as there seem to be pre-existing entries in the library (if the "Select Object" commands I'm seeing are to be believed). Would anyone be able to either enlighten me on how to track objects in the library correctly with EMF files... or suggest an easier alternative to work out which colours are actually being used in the file (as opposed to just being defined)?
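    One possible explanation for the "pre-existing" entries, sketched below (in JavaScript for brevity, though the question's parser is C#): EMR_SELECTOBJECT records may reference a Windows stock object by setting the high bit of the handle index (ENHMETA_STOCK_OBJECT, 0x80000000) rather than an index created earlier in the file.

        // Hypothetical sketch of an EMF object-table tracker that separates
        // colours merely defined by Create*-style records from colours that
        // are actually selected for painting.
        var ENHMETA_STOCK_OBJECT = 0x80000000;
        var objectTable = {};   // handle index -> colour captured from a Create* record
        var usedColours = [];   // colours selected into the device context at least once

        function onCreateObject(index, colour) {
          // e.g. from an EMR_CREATEBRUSHINDIRECT or EMR_CREATEPEN record
          objectTable[index] = colour;
        }

        function onSelectObject(handle) {
          if ((handle & ENHMETA_STOCK_OBJECT) !== 0) {
            // A stock object (white brush, black pen, ...) - it never appears
            // in the object table, which can look like a "pre-existing" entry.
            return;
          }
          if (objectTable[handle] !== undefined) {
            usedColours.push(objectTable[handle]);
          }
        }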

    Read the article

  • Tracking Mouse Stop

    - by Hallik
    I am trying to track mouse movements in the browser, and if a user stops their mouse for 0.5 seconds, to execute some code. I have put a breakpoint in the code below in firebug, and it breaks on the var mousestop = function(evt) line, but then jumps to the return statement. Am I missing something simple? Why isn't it executing the POST statement? I am tracking mouse clicks in a similar way, and it posts to the server just fine. Just not mouse stops.

        $.fn.saveStops = function() {
          $(this).bind('mousemove.clickmap', function(evt) {
            var mousestop = function(evt) {
              $.post('/heat-save.php', {
                x: evt.pageX,
                y: evt.pageY,
                click: "false",
                w: window.innerWidth,
                h: window.innerHeight,
                l: escape(document.location.pathname)
              });
            }, thread;
            return function() {
              clearTimeout(thread);
              thread = setTimeout(mousestop, 500);
            };
          });
        };
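    For contrast, a minimal sketch of the intended pattern (assuming the same jQuery setup and the hypothetical /heat-save.php endpoint from the question): the timer logic runs directly inside the mousemove handler instead of being returned as a function that is never called.

        // Sketch: restart a 500 ms countdown on every mouse move; when the
        // mouse stays still long enough, the timeout fires and posts once.
        $.fn.saveStops = function() {
          var thread;                              // pending timer shared across moves
          return this.bind('mousemove.clickmap', function(evt) {
            clearTimeout(thread);
            thread = setTimeout(function() {
              $.post('/heat-save.php', {           // hypothetical endpoint from the question
                x: evt.pageX,
                y: evt.pageY,
                click: "false",
                w: window.innerWidth,
                h: window.innerHeight,
                l: escape(document.location.pathname)
              });
            }, 500);
          });
        };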

    Read the article

  • Google Analytics - async tracking with two accounts

    - by MatW
    I'm currently testing GA's new async code snippet using two different tracking codes on the same page:

        _gaq.push(
          ['_setAccount', 'UA-XXXXXXXX-1'],
          ['_trackPageview'],
          ['b._setAccount', 'UA-XXXXXXXX-2'],
          ['b._trackPageview']
        );

    Although both codes work, I've noticed that they present inconsistent results. Now, we aren't talking huge differences here, only 1 or 2 visits/day every now and then. However, this site is tiny and 1 or 2 visits equates to a 15% difference in figures. The final site has much more traffic, but my concerns are: will this inconsistency scale with traffic? Assuming not, is a slight variation in recorded stats an accepted norm?

    Read the article

  • Holiday Approval / Tracking

    - by nav
    Hi, has anyone implemented a holiday approval/tracking workflow list in MOSS SharePoint 2007? Can anyone suggest other solutions? The solution below works fine, but I am specifically looking for a way to look up the manager of the user who created the holiday request list item from within the workflow. I have followed this link http://www.u2u.info/Blogs/Kevin/Lists/Posts/Post.aspx?ID=39 which shows how to create a custom workflow approval. Below are the steps outlined by the link:
    1. User adds a new holiday item to the list.
    2. Workflow kicks off.
    3. The workflow has the manager hardcoded (need a way to look this up, maybe from AD??) and creates a task for them to review the request. If desired, this can include an email notification of the task.
    4. Manager reviews, adds comments and approves/denies the request.
    5. User is notified of the completed request.
    Many thanks, Naveen

    Read the article

  • Question about tracking user in a map application using cellid

    - by subh
    I am trying to understand the concept of cellid (http://www.opencellid.org/api). As per that, if we send a request to http://www.opencellid.org/cell/get?key=myapikey&mnc=1&mcc=2&lac=200&cellid=234 it will respond with the latitude and longitude. I was wondering whether this can be used from within a Google Maps application for tracking a user, or whether it needs to be used from within a mobile device? If it can be used from within a web app, what values should it use for these parameters?
    - mcc: mobile country code (decimal)
    - mnc: mobile network code (decimal)
    - lac: location area code (decimal)
    - cellid: value of the cell id
    For example, will it work if we know the cell number of the person (e.g., 281 222 6700)?
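    A minimal sketch of calling the API from JavaScript (hypothetical key and cell values; note that mcc/mnc/lac/cellid identify the serving cell tower as reported by a handset's radio, not a subscriber's phone number, and calling the API directly from a browser page may be blocked by cross-origin restrictions):

        // Hypothetical sketch: query OpenCellID for the location of one cell.
        var params = {
          key: 'myapikey',   // placeholder API key from the question
          mcc: 2,            // mobile country code
          mnc: 1,            // mobile network code
          lac: 200,          // location area code
          cellid: 234        // cell id reported by the device
        };
        var query = Object.keys(params).map(function (k) {
          return k + '=' + encodeURIComponent(params[k]);
        }).join('&');

        // In a browser this may need to be proxied through your own server
        // because of cross-origin restrictions; under Node.js (18+) it can
        // be fetched directly.
        fetch('http://www.opencellid.org/cell/get?' + query)
          .then(function (res) { return res.text(); })
          .then(function (body) { console.log(body); });   // response contains lat/lon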

    Read the article

  • Discrepancy in Google Analytics pageview totals when tracking subdomains

    - by frabjousB
    We are using the old urchin.js and are tracking 2 subdomains under the same profile. We have a “track subdomains” advanced filter defined (as per http://www.google.com/support/googleanalytics/bin/answer.py?hl=en&answer=55524) as well as 2 segments for presenting data in the reports: hostname matches exactly subdomain1.domain-name.com, and hostname matches exactly subdomain2.domain-name.com. When I apply these segments to our Top Content Overview report, the All Visits total for pageviews does not correspond to the number of visits reported for each subdomain. For example:
    All Visits = 53
    subdomain1 = 24
    subdomain2 = 32
    Is there any reason why we would be seeing this discrepancy in numbers?

    Read the article

  • Lightbeam: Mozilla releases an extension that shows who is tracking you on the Internet and lets you follow the "tracking" of your activity in real time

    Lightbeam: Mozilla releases an extension that shows who is tracking you on the Internet and lets you follow the "tracking" of your activity in real time. Internet users are increasingly worried about the privacy of their lives online. The efforts of browser vendors and the W3C to establish a standard (the Do-Not-Track project) that would let users allow or refuse the "tracking" of their Web activity are progressing slowly, with advertisers on one side threatening...

    Read the article

  • Nagios vs Splunk

    - by dan_vitch
    I am looking to implement log tracking at my current company. After some research, it seems Nagios and Splunk are the two best options. I was wondering if there is a consensus on which is better. I understand that Splunk can be quite pricey if the non-free version is used. That being said, I can imagine the answer to my question will be "If you have the money use Splunk, if not use Nagios."

    Read the article

  • Get time-sheet report from JIRA

    - by John
    I have enabled time-tracking on JIRA, developers are logging time spent. But I can't find a way to get a report on time spent, per-user, over a given period. It saves me asking them to separately send me timesheets to check. Is it possible? If so where do I look?

    Read the article

  • What is your experience of Devtrack?

    - by Luke H
    This question covers bug tracking software in general, but I'm interested to find out more detail specifically about Devtrack. If you have first-hand experience of using it, I'd love to hear about it. How would you compare it to other bug tracking systems you know, what do you feel is good and bad about it, and why?

    Read the article

  • LINQ to SQL - Tracking New / Dirty Objects

    - by Joseph Sturtevant
    Is there a way to determine if a LINQ object has not yet been inserted in the database (new) or has been changed since the last update (dirty)? I plan on binding my UI to LINQ objects (using WPF) and need it to behave differently depending on whether or not the object is already in the database.

        MyDataContext context = new MyDataContext();
        MyObject obj;

        if (new Random().NextDouble() > .5)
            obj = new MyObject();
        else
            obj = context.MyObjects.First();

        // How can I distinguish these two cases?

    The only simple solution I can think of is to set the primary key of new records to a negative value (my PKs are an identity field and will therefore be set to a positive integer on INSERT). This will only work for detecting new records. It also requires identity PKs, and requires control of the code creating the new object. Is there a better way to do this? It seems like LINQ must be internally tracking the status of these objects so that it can know what to do on context.SubmitChanges(). Is there some way to access that "object status"?
    Clarification: Apparently my initial question was confusing. I'm not looking for a way to insert or update records. I'm looking for a way, given any LINQ object, to determine if that object has not been inserted (new) or has been changed since its last update (dirty).

    Read the article

  • Google Analytics Event Tracking and Variable visibility.

    - by Jeow
    Hi guys, I have added to my HTML page the standard latest snippet to get Google Analytics to work:

        ... ...
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-15080849-1']);
        _gaq.push(['_trackPageview']);

        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = 'http://www.google-analytics.com/ga.js';
          (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga);
        })();

    Now, looking at the official event tracking guide, Google says to add a snippet such as:

        pageTracker._trackEvent('Videos', 'Play', 'Gone With the Wind');

    My question is: where is pageTracker coming from? Is it a global object in ga.js? But if it is, why didn't Google tell me that they run the risk of breaking some script... I must be missing something. Any help really appreciated.
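    For comparison, a minimal sketch of the same event in the async syntax used by the snippet above (pageTracker is the tracker object the older, non-async setup creates via _gat._getTracker; the async queue never defines it, so with the _gaq snippet the event is pushed onto the queue instead):

        // Async equivalent of pageTracker._trackEvent(...): push the command
        // onto the _gaq queue that the snippet above already sets up.
        _gaq.push(['_trackEvent', 'Videos', 'Play', 'Gone With the Wind']);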

    Read the article

  • git-svn: reset tracking for master

    - by digitala
    I'm using git-svn to work with an SVN repository. My working copies have been created using git svn clone -s http://foo.bar/myproject so that my working copy follows the default directory scheme for SVN (trunk, tags, branches). Recently I've been working on a branch which was created using git-svn branch myremotebranch and checked-out using git checkout --track -b mybranch myremotebranch. I needed to work from multiple locations, so from the branch I git-svn dcommit-ed files to the SVN repository quite regularly. After finishing my changes, I switched back to the master and executed a merge, committed the merge, and tried to dcommit the successful merge to the remote trunk. It seems as though after the merge the remote tracking for the master has switched to the branch I was working on:

        # git checkout master
        # git merge mybranch
        ... (successful)
        # git add .
        # git commit -m '...'
        # git svn dcommit
        Committing to http://foo.bar/myproject/branches/myremotebranch ...
        #

    Is there a way I can update the master so that it's following remotes/trunk as before the merge? I'm using git 1.7.0.5, if that's any help.

    Read the article

  • "variable tracking" is eating my compile time!

    - by wowus
    I have an auto-generated file which looks something like this...

        static void do_SomeFunc1(void* parameter)
        {
            // Do stuff.
        }
        // Continues on for another 4000 functions...

        void dispatch(int id, void* parameter)
        {
            switch(id)
            {
            case ::SomeClass1::id: return do_SomeFunc1(parameter);
            case ::SomeClass2::id: return do_SomeFunc2(parameter);
            // This continues for the next 4000 cases...
            }
        }

    When I build it like this, the build time is enormous. If I inline all the functions automagically into their respective cases using my script, the build time is cut in half. GCC 4.5.0 says ~50% of the build time is being taken up by "variable tracking" when I use -ftime-report. What does this mean and how can I speed compilation while still maintaining the superior cache locality of pulling out the functions from the switch?
    EDIT: Interestingly enough, the build time has exploded only on debug builds, as per the following profiling information of the whole project (which isn't just the file in question, but still a good metric; the file in question takes the most time to build):
    Debug: 8 minutes, 50 seconds
    Release: 4 minutes, 25 seconds

    Read the article

  • JavaScript accordion - tracking time question

    - by JohnMerlino
    Hey all, I was reading up on this JavaScript tutorial: http://www.switchonthecode.com/tutor...ccordion-menus Basically, it shows you how to create an accordion using pure JavaScript, not jQuery. It all made sense to me until the part about tracking the animation. He says, "Because of all that, the first thing we do in the animation function is figure out how much time has passed since the last animation iteration," and then uses this code:

        var elapsedTicks = curTick - lastTick;

    lastTick is equal to the value of when the function was called (Date().getTime()) and curTick is equal to the value when the function was received. I don't understand why we are subtracting one from the other here. I can't imagine that there's any noticeable time difference between these two values. Or maybe I'm missing something. Is that animate() function only called once every time a menu title is clicked, or is it called several times to create the incremental animation effect?

        setTimeout("animate(" + new Date().getTime() + "," + TimeToSlide + ",'" + openAccordion + "','" + nID + "')", 33);

    Thanks for any response.
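    A minimal sketch of the time-based animation pattern the tutorial is using (hypothetical element and names; the point is that the step function reschedules itself every ~33 ms, and each iteration advances by however much real time actually passed, since setTimeout is not guaranteed to fire exactly on time):

        // Hypothetical sketch of time-based animation: each tick measures how
        // much real time has passed and advances the animation by that amount,
        // because setTimeout(fn, 33) may fire later than 33 ms.
        function animateHeight(el, targetPx, durationMs) {
          var startPx = el.offsetHeight;
          var lastTick = new Date().getTime();
          var elapsedTotal = 0;

          function step() {
            var curTick = new Date().getTime();
            elapsedTotal += curTick - lastTick;   // actual time since the last iteration
            lastTick = curTick;

            var progress = Math.min(elapsedTotal / durationMs, 1);
            el.style.height = Math.round(startPx + (targetPx - startPx) * progress) + 'px';

            if (progress < 1) {
              setTimeout(step, 33);               // keep going until the duration is used up
            }
          }
          setTimeout(step, 33);                   // called many times, not once per click
        }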

    Read the article

  • Vacancy Tracking Algorithm implementation in C++

    - by Dave
    I'm trying to use the vacancy tracking algorithm to perform transposition of multidimensional arrays in C++. The arrays come as void pointers, so I'm using address manipulation to perform the copies. Basically, there is an algorithm that starts with an offset and works its way through the whole 1-d representation of the array like Swiss cheese, knocking out other offsets until it gets back to the original one. Then, you have to start at the next, untouched offset and do it again. You repeat until all offsets have been touched. Right now, I'm using a std::set to just fill up all possible offsets (0 up to the multiplicative fold of the dimensions of the array). Then, as I go through the algorithm, I erase from the set. I figure this would be fastest because I need to randomly access offsets in the tree/set and delete them. Then I need to quickly find the next untouched/undeleted offset. First of all, filling up the set is very slow and it seems like there must be a better way. It's individually calling new for every insert. So if I have 5 million offsets, there's 5 million news, plus re-balancing the tree constantly, which, as you know, is not fast for a pre-sorted list. Second, deleting is slow as well. Third, assuming 4-byte data types like int and float, I'm actually using up the same amount of memory as the array itself to store this list of untouched offsets. Fourth, determining if there are any untouched offsets and getting one of them is fast -- a good thing. Does anyone have suggestions for any of these issues?
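    A hypothetical sketch of the cycle-following pass using a flat "visited" array instead of an ordered set (in JavaScript for brevity; the question's code is C++, where the same idea would be a std::vector<bool> or bit array): marking an offset becomes an O(1) store with no per-insert allocation, and finding the next untouched offset is a single forward scan amortized over the whole transpose.

        // Hypothetical sketch of cycle-following ("vacancy tracking") transpose
        // of a rows x cols matrix stored row-major in a flat array.  One byte
        // per element marks offsets that have already been moved, replacing
        // the per-node allocations and rebalancing of a tree-based set.
        function transposeInPlace(a, rows, cols) {
          var n = rows * cols;
          var visited = new Uint8Array(n);        // 0 = untouched, 1 = already placed
          for (var start = 1; start < n - 1; start++) {
            if (visited[start]) continue;         // handled by an earlier cycle
            var i = start;
            var tmp = a[start];
            do {
              // the element at offset i belongs at offset (i * rows) % (n - 1)
              var next = (i * rows) % (n - 1);
              var t = a[next];
              a[next] = tmp;
              tmp = t;
              visited[next] = 1;
              i = next;
            } while (i !== start);
          }
          return a;
        }

        // Example: transposeInPlace([1, 2, 3, 4, 5, 6], 2, 3) -> [1, 4, 2, 5, 3, 6]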

    Read the article

  • iPhone: Tracking/Identifying individual touches

    - by FlorianZ
    I have a quick question regarding tracking touches on the iPhone, and I seem to not be able to come to a conclusion on this, so any suggestions/ideas are greatly appreciated: I want to be able to track and identify touches on the iPhone, i.e. basically every touch has a starting position and a current/moved position. Touches are stored in a std::vector and they shall be removed from the container once they have ended. Their position shall be updated once they move, but I still want to keep track of where they initially started (gesture recognition). I am getting the touches from [event allTouches]; the thing is, the NSSet is unsorted and I seem not to be able to identify the touches that are already stored in the std::vector and match them to the touches in the NSSet (so I know which ones ended and shall be removed, or have been moved, etc.). Here is my code, which works perfectly with only one finger on the touch screen, of course, but with more than one I do get unpredictable results...

        - (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
        {
            [self handleTouches:[event allTouches]];
        }

        - (void) touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event
        {
            [self handleTouches:[event allTouches]];
        }

        - (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
        {
            [self handleTouches:[event allTouches]];
        }

        - (void) touchesCancelled:(NSSet*)touches withEvent:(UIEvent*)event
        {
            [self handleTouches:[event allTouches]];
        }

        - (void) handleTouches:(NSSet*)allTouches
        {
            for(int i = 0; i < (int)[allTouches count]; ++i)
            {
                UITouch* touch = [[allTouches allObjects] objectAtIndex:i];
                NSTimeInterval timestamp = [touch timestamp];
                CGPoint currentLocation = [touch locationInView:self];
                CGPoint previousLocation = [touch previousLocationInView:self];

                if([touch phase] == UITouchPhaseBegan)
                {
                    Finger finger;
                    finger.start.x = currentLocation.x;
                    finger.start.y = currentLocation.y;
                    finger.end = finger.start;
                    finger.hasMoved = false;
                    finger.hasEnded = false;
                    touchScreen->AddFinger(finger);
                }
                else if([touch phase] == UITouchPhaseEnded || [touch phase] == UITouchPhaseCancelled)
                {
                    Finger& finger = touchScreen->GetFingerHandle(i);
                    finger.hasEnded = true;
                }
                else if([touch phase] == UITouchPhaseMoved)
                {
                    Finger& finger = touchScreen->GetFingerHandle(i);
                    finger.end.x = currentLocation.x;
                    finger.end.y = currentLocation.y;
                    finger.hasMoved = true;
                }
            }

            touchScreen->RemoveEnded();
        }

    Thanks!

    Read the article

  • EF4 + STE: Reattaching via a WCF Service? Using a new objectcontext each and every time?

    - by Martin
    Hi there, I am planning to use WCF (not RIA) in conjunction with Entity Framework 4 and STEs (self-tracking entities). If I understand this correctly, my WCF service should return an entity or collection of entities (using List, for example, and not IQueryable) to the client (in my case Silverlight). The client can then change the entity or update it. At this point I believe it is self-tracking???? This is where I get a bit confused, as there are a lot of reported problems with STEs not tracking.. Anyway... Then to update, I just need to send the entity back to my WCF service on another method to do the update. Should I be creating a new ObjectContext every time? In every method? If I am creating a new ObjectContext every time in every method on my WCF service, don't I need to re-attach the STE to the ObjectContext? So basically this alone wouldn't work??

        using(var ctx = new MyContext())
        {
            ctx.Orders.ApplyChanges(order);
            ctx.SaveChanges();
        }

    Or should I be creating the ObjectContext once in the constructor of the WCF service, so that the first call and every additional call using the same WCF instance uses the same ObjectContext? I could create and destroy the WCF service in each method call from the client - hence creating, in effect, a new ObjectContext each time. I understand that it isn't a good idea to keep the ObjectContext alive for very long. Any insight or information would be gratefully appreciated, thanks.

    Read the article

  • What is AssetCache and AFCache?

    - by gentmatt
    I'm currently investigating the different locations where the Flash Player on OS X stores its files. The reason is protecting privacy. I've found that Chrome and Firefox both read/write to the following directories:
    ~/Library/Caches/Adobe/Flash Player/AFCache
    ~/Library/Caches/Adobe/Flash Player/AssetCache
    ~/Library/Preferences/Macromedia/Flash Player/#SharedObjects
    ~/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys
    The last two directories are locations where Firefox stores LSO cookies for long-term tracking. You can delete them manually or do this automatically using an extension such as BetterPrivacy for Firefox. However, I have no clue what the AFCache and AssetCache are for. I assume that you should not delete them, as a cache generally improves the browsing experience, but I'd really like to know what is stored there. I've been searching the Internet quite a bit now, but there does not seem to be much documentation.

    Read the article

  • Logging Bounced messages to a Database (Postfix with virtual domains/users)

    - by Gurunandan
    We have a Postfix installation with a couple of virtual domains, each with virtual users. These domains and users are mapped using a MySQL database. Until now I have been tracking bounces by parsing the Postfix log file. I suspect there must be better and more efficient ways of doing this. I thought of three but I am not sure which is best:
    1. Write a Postfix content filter that logs the bounce and throws away the mail.
    2. Use procmail - but I am not sure how procmail would work with virtual users who have no $HOME defined.
    3. Write a script that POPs mail from the mailboxes, then parses, logs and deletes the bounced email.
    I would appreciate advice on which would be best from a maintenance point of view and most efficient in terms of conserving server resources. Thanks

    Read the article

  • IT Asset Management

    - by CogitoErgoSum
    Our company has grown quite quickly and I am facing new tasks which I did not think I'd need to deal with. Recently we've come to a point where we have 100+ devices (routers, bridges, computers, laptops, VoIP phones, etc.). The other day I was quite frightened when I asked for an inventory and no one had one. I want to start tagging all equipment and recording serials to begin tracking our inventory and ensuring we have a proper record of what equipment we have. Does anyone have advice on how to go about 1. convincing the higher-ups why we need to do this, and 2. what software or strategies might work? Keep in mind this is not for furniture, office equipment, etc., but IT-specific equipment. I'm concerned about people 1. stealing the physical devices, and 2. losing track of configuration data, etc., in case we'd need to do a wipe and restore.

    Read the article

  • web based source control management software [closed]

    - by tom smith
    Hi. Not sure if this is the right place, but hopefully someone might have thoughts on a solution/vendor. I'm starting to spec out a project that will require multiple (50-100) developers to be able to manipulate source files/scripts for a large-scale project. The idea is to have each app go through a dev/review/test process, where the users can select (or be assigned) the role they're going to have for the given app. I'm looking for web-based version control, issue tracking, user roles/access, workflow functionality, etc... Ideally, the process will also allow the reviewed/valid app to then be exported to a separate system for testing on the test server/environment. This can be hosted on our servers, or we can do the colo process. I've checked out Atlassian/CollabNet, but any thoughts you can provide would be appreciated as well. Thanks.

    Read the article
