Search Results

Search found 10098 results on 404 pages for 'per pixel'.


  • iPhone - how to be notified of call completion

    - by ecodan
    Hi - I'm developing an application that needs to take action on completed phone calls, preferably right after the call ends but minimally once per day. I've read up on the new CoreTelephony framework, and it seems I can get call events if my app is active, but I don't see how to launch/wake my app when a call ends if my app is not the foreground app. I also don't see how any of the new pseudo-background "modes" would allow my app to listen for these events in the background. Do any of you know how this might be done? If post-call processing isn't possible, then I'd like to figure out a way to automatically wake my app up once per day, pull all of the call events since the last wakeup, and process them. I know how I might do this with Push or Local notifications, but my understanding is that those require user action to continue; in this case, I just want the processing to happen automatically. Is there a mechanism that would enable this? Thanks, Dan

    Read the article

  • Image compatibility in iphone and android

    - by damodar
    I developed the UI for an iPhone app and now want to reuse the same UI in an Android app. I read that Android uses dip (density-independent pixels) for image sizing, and also that 1 dip = 1.5 px, so I simply multiplied the image dimensions by 1.5. The problem is that the resulting images are blurry and not as sharp as in the iPhone app. Can somebody suggest how I should prepare the design so that it can be used on both iPhone and Android?
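
    On Android the dp-to-px factor is not a fixed 1.5; it depends on the screen density (1.5 applies only to hdpi screens), which is one reason a single pre-scaled asset ends up blurry. A minimal sketch of letting the platform do the conversion with Android's TypedValue.applyDimension (the wrapper class below is only illustrative):

        import android.content.res.Resources;
        import android.util.TypedValue;

        public final class DpConverter {
            // Convert a dp value into physical pixels for the device's actual density.
            // On mdpi (160 dpi) 1 dp == 1 px; on hdpi it is 1.5 px, which is where the
            // "multiply by 1.5" rule of thumb comes from.
            public static float dpToPx(float dp, Resources res) {
                return TypedValue.applyDimension(
                        TypedValue.COMPLEX_UNIT_DIP, dp, res.getDisplayMetrics());
            }
        }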

    Read the article

  • Working with images in Scala

    - by dbyrne
    I am generating large PNG files from a Scala program. Currently, I am doing it the same way I would do it in Java: I am creating a new BufferedImage and setting each pixel to the correct color. This works fine, but I am wondering if there are any good libraries for working with images in Scala? I am looking for something like Ruby's RMagick library.
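
    For reference, the plain BufferedImage approach the question describes can be called from Scala exactly as from Java, since only JDK classes are involved. A minimal Java sketch (the dimensions and colour formula are arbitrary placeholders):

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class PngWriter {
            public static void main(String[] args) throws IOException {
                int w = 512, h = 512;
                BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        int rgb = ((x % 256) << 16) | ((y % 256) << 8); // placeholder colour
                        img.setRGB(x, y, rgb);
                    }
                }
                ImageIO.write(img, "png", new File("out.png"));
            }
        }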

    Read the article

  • Editing an UIImage

    - by wwrob
    I have a UIImage that I want to edit (say, make every second row of pixels black). Now I am aware of the functions that extract PNG or JPEG data from the image, but that's raw compressed data and I have no idea how the PNG/JPEG formats work internally. Is there a way I can extract the colour data for each pixel into an array, and then make a new UIImage using the data from that array?

    Read the article

  • CouchDB Versioning / Auditing

    - by Cory
    I'm attempting to use CouchDB for a system that requires full auditing of all data operations. Because of its built-in revision tracking, couch seemed like an ideal choice. But then I read in the O'Reilly book that "CouchDB does not guarantee that older versions are kept around." I can't seem to find much more documentation on this point, or on how couch deals with its revision tracking internally. Is there any way to configure couch, either at a per-database or per-document level, to keep all versions around forever? If so, how?

    Read the article

  • Odd optimization problem under MSVC

    - by Goz
    I've seen this blog: http://igoro.com/archive/gallery-of-processor-cache-effects/ The "weirdness" in part 7 is what caught my interest. My first thought was "That's just C# being weird". It's not: I wrote the following C++ code.

        volatile int* p = (volatile int*)_aligned_malloc( sizeof( int ) * 8, 64 );
        memset( (void*)p, 0, sizeof( int ) * 8 );

        double dStart = t.GetTime();

        for (int i = 0; i < 200000000; i++)
        {
            //p[0]++;p[1]++;p[2]++;p[3]++;   // Option 1
            //p[0]++;p[2]++;p[4]++;p[6]++;   // Option 2
            p[0]++;p[2]++;                   // Option 3
        }

        double dTime = t.GetTime() - dStart;

    The timings I get on my 2.4 GHz Core 2 Quad are as follows:

        Option 1 = ~8 cycles per loop.
        Option 2 = ~4 cycles per loop.
        Option 3 = ~6 cycles per loop.

    Now this is confusing. My reasoning behind the difference comes down to the cache write latency (3 cycles) on my chip and an assumption that the cache has a 128-bit write port (this is pure guesswork on my part). On that basis, in Option 1: it will increment p[0] (1 cycle), then increment p[2] (1 cycle), then it has to wait 1 cycle (for the cache), then p[1] (1 cycle), then wait 1 cycle (for the cache), then p[3] (1 cycle). Finally, 2 cycles for increment and jump (though it's usually implemented as decrement and jump). This gives a total of 8 cycles. In Option 2: it can increment p[0] and p[4] in one cycle, then increment p[2] and p[6] in another cycle. Then 2 cycles for subtract and jump. No waits needed on the cache. Total 4 cycles. In Option 3: it can increment p[0], then has to wait 2 cycles, then increment p[2], then subtract and jump. The problem is, if you set case 3 to increment p[0] and p[4] it STILL takes 6 cycles (which kinda blows my 128-bit read/write port out of the water). So ... can anyone tell me what the hell is going on here? Why DOES case 3 take longer? Also I'd love to know what I've got wrong in my thinking above, as I obviously have something wrong! Any ideas would be much appreciated! :) It'd also be interesting to see how GCC or any other compiler copes with it as well!

    Edit: Jerry Coffin's idea gave me some thoughts. I've done some more tests (on a different machine so forgive the change in timings) with and without nops and with different counts of nops:

        case 2 - 0.46   00401ABD jne (401AB0h)

        0 nops - 0.68   00401AB7 jne (401AB0h)
        1 nop  - 0.61   00401AB8 jne (401AB0h)
        2 nops - 0.636  00401AB9 jne (401AB0h)
        3 nops - 0.632  00401ABA jne (401AB0h)
        4 nops - 0.66   00401ABB jne (401AB0h)
        5 nops - 0.52   00401ABC jne (401AB0h)
        6 nops - 0.46   00401ABD jne (401AB0h)
        7 nops - 0.46   00401ABE jne (401AB0h)
        8 nops - 0.46   00401ABF jne (401AB0h)
        9 nops - 0.55   00401AC0 jne (401AB0h)

    I've included the jump statements so you can see that the source and destination are in one cache line. You can also see that we start to get a difference when we are 13 bytes or more apart. Until we hit 16 ... then it all goes wrong. So Jerry isn't right (though his suggestion DOES help a bit); however, something IS going on. I'm more and more intrigued to try and figure out what it is now. It does appear to be some sort of memory alignment oddity rather than some sort of instruction throughput oddity. Anyone want to explain this for an inquisitive mind? :D

    Edit 3: Interjay has a point on the unrolling that blows the previous edit out of the water. With an unrolled loop the performance does not improve. You need to add a nop to make the gap between jump source and destination the same as for my good nop count above. Performance still sucks. It's interesting that I need 6 nops to improve performance, though. I wonder how many nops the processor can issue per cycle? If it's 3 then that accounts for the cache write latency ... But if that's it, why is the latency occurring? Curiouser and curiouser ...

    Read the article

  • Tool for response time analysis on JBoss server?

    - by Ariel Vardi
    I am running a pretty high-traffic cluster of JBoss servers serving REST requests, and I am interested in tools that read the access logs in Tomcat format (with the %D parameter) to provide a detailed analysis of response time on a per-call basis. Ideally this tool would generate a chart showing the progression of the response time throughout the day, hour by hour, then a weekly view with averages per day, and a monthly view with averages per week (CACTI style). I've looked for such tools and couldn't find anything. Are any of you aware of something close to that, before I start writing my own? I haven't looked into CACTI extensions yet, but could that be an option?
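
    Failing an off-the-shelf tool, the hourly aggregation itself is small enough to script. A rough Java sketch, assuming the %D value is the last whitespace-separated field and the timestamp is in the usual [dd/MMM/yyyy:HH:mm:ss zone] form - both assumptions need checking against the actual log pattern in use:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.TreeMap;

        public class ResponseTimeByHour {
            public static void main(String[] args) throws IOException {
                // hour key ("25/Oct/2010:15") -> {sum of %D values, request count}
                TreeMap<String, long[]> buckets = new TreeMap<>();
                for (String line : Files.readAllLines(Path.of(args[0]))) {
                    int open = line.indexOf('[');
                    if (open < 0) continue;
                    String hour = line.substring(open + 1, open + 15); // dd/MMM/yyyy:HH
                    String[] fields = line.trim().split("\\s+");
                    long duration = Long.parseLong(fields[fields.length - 1]); // the %D field
                    long[] b = buckets.computeIfAbsent(hour, k -> new long[2]);
                    b[0] += duration;
                    b[1]++;
                }
                buckets.forEach((hour, b) ->
                        System.out.printf("%s  avg=%d over %d requests%n", hour, b[0] / b[1], b[1]));
            }
        }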

    Read the article

  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied to elsewhere within the surface:

        add_blt(rect src, point dst);

    There can be any number of operations posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily. This gets tricky because of overlaps, of course. A blit could re-blit a previously copied pixel. Ideally, blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation. It's tricky but not impossible to put this together. I'm just trying not to reinvent the wheel. I looked around on the 'net, and the only thing I found was the SDL_BlitPool library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work?

    - Implement customized region handler code that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it will automatically associate the metadata of this rectangle with the resulting sub-rectangles.
    - When the optimization run starts, create an empty region handled by the above customized code; call this the master region.
    - Iterate through the blt queue, and for every entry:
        - Let srcrect be the source rectangle for the blt being examined.
        - Get the intersection of srcrect and the master region into temp region.
        - Remove temp region from the master region, so the master region no longer covers temp region.
        - Promote srcrect to a region (srcrgn) and subtract temp region from it.
        - Offset temp region and srcrgn with the vector of the current blt: their union will cover the destination area of the current blt.
        - Add to the master region all rects in temp region, retaining the original source metadata (step one of adding the current blt to the master region).
        - Add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region).
    - Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) | (r1.y1 == r2.y1 && r1.y2 == r2.y2). If yes, combine them.
    - Enumerate the master region's sub-rectangles. Every rectangle returned is an optimized blt operation destination. The associated metadata is the blt operation's source.
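
    As a starting point for the bookkeeping, here is a rough Java sketch of the queue and the "tagged sub-rectangle" idea, using java.awt.Rectangle for the geometry; the region splitting and merging described in the EDIT above is the hard part and is only marked in a comment:

        import java.awt.Point;
        import java.awt.Rectangle;
        import java.util.ArrayList;
        import java.util.List;

        // One queued copy: src rectangle moved so its top-left lands on dst.
        class BltOp {
            final Rectangle src;
            final Point dst;
            BltOp(Rectangle src, Point dst) { this.src = src; this.dst = dst; }
            Rectangle dstRect() { return new Rectangle(dst.x, dst.y, src.width, src.height); }
        }

        // A piece of the master region tagged with the blt that last wrote it.
        class TaggedRect {
            final Rectangle rect;
            final BltOp writtenBy;
            TaggedRect(Rectangle rect, BltOp writtenBy) { this.rect = rect; this.writtenBy = writtenBy; }
        }

        public class BltQueue {
            private final List<BltOp> queue = new ArrayList<>();

            public void addBlt(Rectangle src, Point dst) { queue.add(new BltOp(src, dst)); }

            // Walk the queue and report where a later blt reads pixels an earlier blt
            // wrote - exactly the spots where the region handler would have to split
            // rectangles and carry the metadata over to the resulting pieces.
            public void findOverlaps() {
                List<TaggedRect> master = new ArrayList<>();
                for (BltOp op : queue) {
                    for (TaggedRect t : master) {
                        Rectangle overlap = op.src.intersection(t.rect);
                        if (!overlap.isEmpty()) {
                            System.out.println("re-blit of previously copied area: " + overlap);
                        }
                    }
                    master.add(new TaggedRect(op.dstRect(), op));
                }
            }
        }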

    Read the article

  • mod_perl memory leak

    - by marghi
    Hello, I recently discovered that one of our sites has a memory leak; it's very strange because it happened all of a sudden. I've used GTop to measure the memory size per process, and it tells me that the real value is somewhere around 65 MB (on the server) per request, plus an additional 5 MB shared. I tried preloading the modules in the startup.pl file as indicated in the performance-tuning article for mod_perl. Nothing happened; in fact the shared memory decreased to 3.7 MB. At that point I suspected my application was leaking memory, so before any line of code got executed I measured the memory, just to find out that the total value is in fact 64 MB. My questions are: Is there a default preallocation of memory for each process? Is there a configuration issue? Is mod_perl leaking memory? Thank you very much.

    Read the article

  • Trying to convert a 2D image into 3D objects in Java

    - by Kyle
    Hey, I'm trying to take a simple image, something like a black background with colored blocks representing walls, and I'm trying to figure out how to go about starting on something like this. Do I need to parse the image and look at each pixel, or is there an easier way to do it? I'm using Java3D, but it doesn't seem to have any sort of built-in support for that...
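
    Parsing the pixels is the usual approach and is cheap for a small map image. A minimal Java sketch that collects the non-black cells; turning each cell into a Java3D box would then be a separate step (class and method names here are made up):

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.imageio.ImageIO;

        public class WallMap {
            // Scan the map image and return the (x, y) cells that are not black;
            // each returned cell would become one wall block in the 3D scene.
            static List<int[]> wallCells(File mapFile) throws IOException {
                BufferedImage img = ImageIO.read(mapFile);
                List<int[]> cells = new ArrayList<>();
                for (int y = 0; y < img.getHeight(); y++) {
                    for (int x = 0; x < img.getWidth(); x++) {
                        int rgb = img.getRGB(x, y) & 0xFFFFFF; // ignore alpha
                        if (rgb != 0) cells.add(new int[] { x, y });
                    }
                }
                return cells;
            }
        }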

    Read the article

  • Capture bitmap of window in isolation (Windows OS)?

    - by Jake Petroules
    I know there's a way to capture a bitmap of a window in Windows without anything that may be obscuring it (e.g. you have a window with a dialog in front, but the dialog is not captured as it would be if you did a simple pixel grab), but I can't remember how to do so. Also I believe this is only possible in Windows Vista and above, with the introduction of the compositing window manager?

    Read the article

  • How does your team work together in a remote setup?

    - by Carl Rosenberger
    Hi, we are a distributed team working on the object database db4o. The way we work:

    - We try to program in pairs only.
    - We use Skype and VNC or SharedView to connect and work together.
    - In our online Tuesday meeting every week (usually about 1 hour):
        - we talk about the tasks done last week
        - we create new pairs for the next week with a random generator, so knowledge and friendship distribute evenly
        - we set the priority for any new tasks or bugs that have come in
        - each team picks the tasks it likes to do from the highest-prioritized ones.
    - From Tuesday to Wednesday we estimate tasks. We have a unit of work we call "Ideal Developer Session" (IDS), maybe 2 or 3 hours of working together as a pair. It's not perfectly well defined (because we know estimation is always inaccurate), but from our past shared experience we have a common sense of what an IDS is. If we can't estimate a task because it feels too long for a week, we break it down into smaller, estimatable tasks.
    - During a short meeting on Wednesday we commit to a workload we feel is well doable in a week. We commit to complete. If a team runs out of committed tasks during the week, it can pick new ones from the prioritized queue we have in Jira.

    When we started working this way, some of us found that remote pair programming takes a lot of energy because you are so focussed. If you pair program for more than 5 or 6 hours per day, you get drained. On the other hand, working like this has turned out to be very efficient. The knowledge about our codebase is evenly distributed and we have really learnt lots from each other. I would be very interested to hear about the experiences of other teams working in a similar way. Things like:

    - How often do you meet?
    - Have you tried different sprint lengths (one week, two weeks, longer)?
    - Which tools do you use?
    - Which issue tracker do you use?
    - What do you do about time zone differences?
    - How does it work for you to integrate new people into the team?
    - How many hours do you usually work per week?
    - How does your management interact with the way you are working? Do you get put on a waterfall with hard deadlines?
    - What's your unit of work?
    - What is your normal velocity? (units of work done per week)

    Programming work should be fun, and for us it usually is great fun. I would be happy about any new ideas on how to make it even more fun and/or more efficient.

    Read the article

  • Does Facebook have a list of icons that others can use?

    - by beatak
    For Facebook's 16x16 pixel icon, the following URL is used fairly often, but Facebook hasn't actually stated that people may use it, has it? http://static.ak.fbcdn.net/images/icons/favicon.gif Is there any official list of icons from Facebook that people can use on their own site for Facebook Connect?

    Read the article

  • Nested WHILE loops not acting as expected - Javascript / Google Apps Script

    - by dthor
    I've got a function that isn't acting as intended. Before I continue, I'd like to preface this with the fact that I normally program in Mathematica and have been tasked with porting over a Mathematica function (that I wrote) to JavaScript so that it can be used in a Google Docs spreadsheet. I have about 3 hours of JavaScript experience... The entire (small) project is calculating the Gross Die per Wafer, given a wafer and die size (among other inputs). The part that isn't working is where I check to see if any corner of the die is outside of the effective radius, Reff. The function takes a list of X and Y coordinates which, when combined, create the individual XY coord of the center of the die. That is then put into a separate function "maxDistance" that calculates the distance of each of the 4 corners and returns the max. This max value is checked against Reff. If the max is inside the radius, 1 is added to the die count.

        // Take a list of X and Y values and calculate the Gross Die per Wafer
        function CoordsToGDW(Reff,xSize,ySize,xCoords,yCoords) {
            // Initialize Variables
            var count = 0;
            // Nested loops create all the x,y coords of the die centers
            for (var i = 0; i < xCoords.length; i++) {
                for (var j = 0; j < yCoords.length, j++) {
                    // Add 1 to the die count if the distance is within the effective radius
                    if (maxDistance(xCoords[i],yCoords[j],xSize,ySize) <= Reff) {count = count + 1}
                }
            }
            return count;
        }

    Here are some examples of what I'm getting:

        xArray={-52.25, -42.75, -33.25, -23.75, -14.25, -4.75, 4.75, 14.25, 23.75, 33.25, 42.75, 52.25, 61.75}
        yArray={-52.5, -45.5, -38.5, -31.5, -24.5, -17.5, -10.5, -3.5, 3.5, 10.5, 17.5, 24.5, 31.5, 38.5, 45.5, 52.5, 59.5}
        CoordsToGDW(45,9.5,7.0,xArray,yArray)
        returns: 49 (should be 72)

        xArray={-36, -28, -20, -12, -4, 4, 12, 20, 28, 36, 44}
        yArray={-39, -33, -27, -21, -15, -9, -3, 3, 9, 15, 21, 27, 33, 39, 45}
        CoordsToGDW(32.5,8,6,xArray,yArray)
        returns: 39 (should be 48)

    I know that maxDistance() is returning the correct values. So, where's my simple mistake? Also, please forgive me writing some things in Mathematica notation...

    Edit #1: A little bit of formatting.
    Edit #2: Per showi, I've changed WHILE loops to FOR loops and replaced <= with <. Still not the right answer. It did clean things up a bit though...
    Edit #3: What I'm essentially trying to do is take [a,b] and [a,b,c] and return [[a,a],[a,b],[a,c],[b,a],[b,b],[b,c]]
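
    For what it's worth, the pairing described in Edit #3 is a plain Cartesian product of the two coordinate lists. A minimal sketch of that outer/inner loop structure (written in Java here only for illustration; the shape is the same in JavaScript, and the class and method names are made up):

        import java.util.ArrayList;
        import java.util.List;

        public class DieCenters {
            // Pair every x with every y: [a,b] and [a,b,c] -> [[a,a],[a,b],[a,c],[b,a],[b,b],[b,c]]
            static List<double[]> centers(double[] xs, double[] ys) {
                List<double[]> out = new ArrayList<>();
                for (int i = 0; i < xs.length; i++) {
                    for (int j = 0; j < ys.length; j++) { // note the ';' before j++
                        out.add(new double[] { xs[i], ys[j] });
                    }
                }
                return out;
            }

            public static void main(String[] args) {
                System.out.println(centers(new double[] {1, 2}, new double[] {1, 2, 3}).size()); // 6
            }
        }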

    Read the article

  • Strategy for Storing Multiple Nullable Booleans in SQL

    - by Eric J.
    I have an object (happens to be C#) with about 20 properties that are nullable booleans. There will be perhaps a few million such objects persisted to a SQL database (currently SQL Server 2008 R2, but MySQL may need to be supported in the future). The instances themselves are relatively large because they contain about a paragraph of text as well as some other unrelated properties. For a given object instance, most of the properties will be null most of the time. When users search for instances of such objects, they will select perhaps 1-3 of the nullable boolean properties and search for instances where at least one of those 1-3 properties is non-null (OR search). My first thought is to persist the object to a single table with nullable BIT columns representing the nullable boolean properties. However, this strategy will require one index per BIT column to avoid performing a table scan when searching. Further, each index would not be particularly selective since there are only three possible values per index. Is there a better way to approach this problem?

    Read the article

  • How can we block the user from unchecking a DataGridView checkbox?

    - by hawbsl
    We have a DataGridViewCheckBox column bound to a Boolean property in our class. The property setter has some logic which says that under certain conditions a True flag cannot be changed, i.e. it stays checked forever. This is on a per-record basis, so the entire column can't be read-only, only certain rows. Pseudo code:

        Public Property Foo() As Boolean
            Get
                Return _Foo
            End Get
            Set(ByVal value As Boolean)
                If _Foo And Bar And value = False Then
                    'do nothing, in this scenario once you're true, you stay true
                Else
                    _Foo = value
                End If
            End Set
        End Property

    Databinding is handling all of this fine, except that the checkbox is visibly cleared when it's clicked. Then, of course, when the binding / setter is fired (as you move off that cell) it is restored to its checked status per the underlying logic. Ultimately it doesn't matter too much, but it's a clumsy bit of UI. How can we intercept the user's click and keep the box checked?

    Read the article

  • Large file download for a Rails project

    - by Horace Ho
    One client project will go online two months from now. One of the requirements that changed is to support large-file downloads (10 to 15 MB per RAW camera file, with an expected 1000 to 5000 file downloads per day) worldwide for their customers. The process will be:

    - there is an upload screen (via Paperclip) to the Rails local public folder
    - an hourly task uploads the files to web storage (S3?)
    - the download URL is updated from the Paperclip URL to the web URL

    Questions: Is there a gem/plug-in for this purpose? If not, is there a gem/plug-in for S3 to recommend? Questions about the storage provider: Is S3 recommended, or is there another service to recommend? The baseline is: the client's web server does not and will not have the bandwidth to handle the downloads. Thanks

    Read the article

  • MFC/WIN32: mouse hover highlight in listctrl

    - by Mordachai
    The ListView control in Windows Explorer gives a highlight to whatever item is under the mouse, without affecting the current selection. This helps enormously with relating a given tooltip to the item it applies to within a list view - especially in report mode. However, I am currently unable to find any APIs that would give my MFC application's CListCtrl that same behavior. Extended styles only have LVS_EX_TRACKSELECT, which actually alters the current selection (yuck!). Does anyone know how to give a standard CListCtrl (or whatever control it actually sits on top of) the mouse hot-tracking capability? I found some articles on how to provide per-cell and per-row tooltip text, but it's hard to tell what the tooltips relate to without something highlighting...

    Read the article

  • MySQL Split Time Ranges into Smaller Chunks

    - by Neren
    Hello all, I've recently been tasked with finishing a PHP/MySQL web app after the developer quit last week. I'm no MySQL expert, so I apologize if this is an intensely simple question. I've searched SO for the better part of two days trying to find a relatively easy solution to my problem, which is as follows.

    Problem in a nutshell: I have a MySQL table full of start and end datetime (GMT -5) & UNIX timestamp values covering durations of irregular length, and I need to break/split/divide them into more-regular time chunks (5 minutes). I'm not after a count of row entries per time chunk/bucket/period, if that makes any sense.

    Data example:

        started,             ended,               started_UNIX, ended_UNIX
        2010-10-25 15:12:33, 2010-10-25 15:47:09, 1288033953,   1288036029

    What I'm hoping to get:

        2010-10-25 15:12:33, 2010-10-25 15:15:00, 1288033953, 1288037700
        2010-10-25 15:15:00, 2010-10-25 15:20:00, 1288037700, 1288038000
        2010-10-25 15:20:00, 2010-10-25 15:25:00, 1288038000, 1288038300
        2010-10-25 15:25:00, 2010-10-25 15:30:00, 1288038300, 1288038600
        2010-10-25 15:30:00, 2010-10-25 15:35:00, 1288038600, 1288038900
        2010-10-25 15:35:00, 2010-10-25 15:40:00, 1288038900, 1288039200
        2010-10-25 15:40:00, 2010-10-25 15:45:00, 1288039200, 1288039500
        2010-10-25 15:45:00, 2010-10-25 15:47:09, 1288039500, 1288039629

    If you're interested, here's the quick & dirty on the app and why I need the data. App overview: the application receives very simple POST requests generated by a basic sensor device when its input pins go to ground, which submits an INSERT query to the database where MySQL records a timestamp (as started). When the input pins return from a grounded state, the device submits a different POST request, which causes the PHP app to submit an UPDATE query, where a modification timestamp is inserted (as ended). My employer recently changed the periodic reporting unit of measure from Seconds "On" Per Day to Seconds "On" Per 5 Minute Interval. I had formulated what I thought would be a workable solution, but when I looked at it on paper, it looked like Rube Goldberg's nightmare constructed in MySQL, so that was out. Any suggestions as to how to break these spans into 5 minute blocks? Keeping it all in MySQL would be my preference, though I'll take any suggestions. Thank you for any suggestions you may have. Again, I apologize if this is a no-brainer. If I ask any additional questions of the SO collective consciousness in the future, I'll try to word them a bit better. Any help will be happily welcomed. Thanks, Neren
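
    Whether the splitting ends up in MySQL or in the PHP layer, the arithmetic is the same: walk from the start timestamp to the next 300-second boundary, then step by 300 seconds until the end, keeping the ragged first and last pieces. A sketch of that logic (written in Java here purely to illustrate the boundary math; Instant prints in UTC, so the wall-clock strings will differ from the GMT-5 values):

        import java.time.Instant;

        public class FiveMinuteChunks {
            // Split [start, end) in unix seconds into pieces aligned to 'step'-second
            // boundaries, keeping the ragged first and last pieces.
            static void split(long start, long end, long step) {
                long cursor = start;
                while (cursor < end) {
                    long boundary = (cursor / step + 1) * step; // next aligned boundary
                    long chunkEnd = Math.min(boundary, end);
                    System.out.println(cursor + " -> " + chunkEnd
                            + "  (" + Instant.ofEpochSecond(cursor) + " -> "
                            + Instant.ofEpochSecond(chunkEnd) + ")");
                    cursor = chunkEnd;
                }
            }

            public static void main(String[] args) {
                split(1288033953L, 1288036029L, 300); // the 15:12:33 -> 15:47:09 example row
            }
        }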

    Read the article

  • Finding open contiguous blocks of time for every day of a month, fast

    - by Chris
    I am working on a booking availability system for a group of several venues, and am having a hard time generating the availability of time blocks for days in a given month. This is happening server-side in PHP, but the concept itself is language-agnostic - I could be doing this in JS or anything else.

    Given a venue_id, month, and year (6/2012 for example), I have a list of all events occurring in that range at that venue, represented as unix timestamps start and end. This data comes from the database. I need to establish what, if any, contiguous blocks of time of a minimum length (different per venue) exist on each day. For example, on 6/1 I have an event between 2:00pm and 7:00pm. The minimum time is 5 hours, so there's a block open there from 9am - 2pm and another between 7pm and 12pm. This would continue for the 2nd, 3rd, etc... every day of June. Some (most) of the days have nothing happening at all, some have 1 - 3 events.

    The solution I came up with works, but it also takes waaaay too long to generate the data. Basically, I loop over every day of the month and create an array of timestamps for each 15 minutes of that day. Then, I loop over the time spans of events from that day by 15 minutes, marking any "taken" timeslot as false. What remains is an array that contains timestamps of free time vs. taken time:

        //one day's array after processing through loops (not real timestamps)
        array(
            12345678=>12345678, // <--- avail
            12345878=>12345878,
            12346078=>12346078,
            12346278=>false,    // <--- not avail
            12346478=>false,
            12346678=>false,
            12346878=>false,
            12347078=>12347078, // <--- avail
            12347278=>12347278
        )

    Now I would need to loop THIS array to find continuous time blocks, then check to see if they are long enough (each venue has a minimum), and if so establish the descriptive text for their start and end (i.e. 9am - 2pm). WHEW! By the time all this looping is done, the user has grown bored and wandered off to YouTube to watch videos of puppies; it takes ages to examine 30 or so days. Is there a faster way to solve this issue?

    To summarize the problem: given time ranges t1 and t2 on day d, how can I determine the remaining time left in d that is longer than the minimum time block m? This data is assembled on demand via AJAX as the user moves between calendar months. Results are cached per page load, so if the user goes to July a second time, the data that was generated the first time would be reused. Any other details that would help, let me know.

    Edit: Per request, the database structure (or the part that is relevant here):

        events
            id (bigint)
            title (varchar)

        event_times
            id (bigint)
            event_id (bigint)
            venue_id (bigint)
            start (bigint)
            end (bigint)

        venues
            id (bigint)
            name (varchar)
            min_block (int)
            min_start (varchar)
            max_start (varchar)
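
    One way to drop the 15-minute grid entirely is to sort the day's events by start time and walk them once, emitting any gap that meets the venue's minimum; that is proportional to the number of events rather than the number of slots. A minimal sketch of the idea (in Java rather than PHP, purely for illustration; the type and method names are made up):

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class FreeBlocks {
            record Busy(long start, long end) {} // one event_times row, unix seconds
            record Free(long start, long end) {} // an open block worth reporting

            // Gaps of at least minBlock seconds between dayStart and dayEnd.
            static List<Free> freeBlocks(long dayStart, long dayEnd, long minBlock, List<Busy> events) {
                List<Busy> sorted = new ArrayList<>(events);
                sorted.sort(Comparator.comparingLong(Busy::start));
                List<Free> out = new ArrayList<>();
                long cursor = dayStart;
                for (Busy b : sorted) {
                    if (b.start() - cursor >= minBlock) out.add(new Free(cursor, b.start()));
                    cursor = Math.max(cursor, b.end()); // handles overlapping events
                }
                if (dayEnd - cursor >= minBlock) out.add(new Free(cursor, dayEnd));
                return out;
            }
        }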

    Read the article

  • Should I aim for fewer HTTP requests or more cacheable CSS files?

    - by Jonathan Hanson
    We're being told that fewer HTTP requests per page load is a Good Thing. The extreme form of that for CSS would be to have a single, unique CSS file per page, with any shared site-wide styles duplicated in each file. But there's a trade off there. If you have separate shared global CSS files, they can be cached once when the front page is loaded and then re-used on multiple pages, thereby reducing the necessary size of the page-specific CSS files. So which is better in real-world practice? Shorter CSS files through multiple discrete CSS files that are cacheable, or fewer HTTP requests through fewer-but-larger CSS files?

    Read the article
