Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.

Page 479/530

  • Getting the item count of a large SharePoint list in the fastest way

    - by sooraj
    I am trying to get the count of the items in a SharePoint document library programmatically. The scale I am working with is 30-70,000 items. We have a user control in a SmartPart to display the count. Ours is a TEAM site. This is the code to get the total count: SPList VoulnterrList = web.Lists[ListTitle]; SPQuery query = new SPQuery(); query.ViewAttributes = "Scope=\"Recursive\""; string queries = "<Where><Eq><FieldRef Name='ApprovalStatus' /><Value Type='Choice'>Pending</Value></Eq></Where>"; query.Query = queries; SPListItemCollection lstitemcollAssoID = VoulnterrList.GetItems(query); lblCount.Text = "Total Proofs: " + VoulnterrList.Items.Count.ToString() + " Pending Proofs: " + lstitemcollAssoID.Count.ToString(); The problem is that this has a serious performance issue: it takes 75 to 80 seconds to load the page. If we comment it out, the page load drops to 4 seconds. Is there any better approach for this problem? Ours is SharePoint 2007.

    Read the article

  • Merging two arrays in PHP

    - by Industrial
    Hi everyone, I am trying to create a new array from two current arrays. I tried array_merge, but it will not give me what I want. $array1 is a list of keys that I pass to a function. $array2 holds the results from that function, but it doesn't contain entries for keys that had no available result. So, I want to make sure that all requested keys come out with empty ('null'-ed) values, as in the $result array shown below. It goes a little something like this: $array1 = array('item1', 'item2', 'item3', 'item4'); $array2 = array( 'item1' => 'value1', 'item2' => 'value2', 'item3' => 'value3' ); Here's the result I want: $result = array( 'item1' => 'value1', 'item2' => 'value2', 'item3' => 'value3', 'item4' => '' ); It can be done this way, but I don't think that it's a good solution - I really don't like taking the easy way out and suppressing PHP errors by adding @ operators in the code. This sample would obviously throw errors, since 'item4' is not in $array2 in the example. foreach ($keys as $k => $v){ @$array[$v] = $items[$v]; } So, what's the fastest (performance-wise) way to accomplish the same result?
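
    One possible shape for the fix, sketched in Python for illustration (the question itself is PHP, where a rough equivalent is array_merge(array_fill_keys($array1, ''), $array2): build the defaults first and let the real values overwrite them):

        # Sketch: build the result from the requested keys, falling back to a
        # default for any key the lookup did not return.
        def merge_with_defaults(keys, values, default=""):
            return {k: values.get(k, default) for k in keys}

        array1 = ["item1", "item2", "item3", "item4"]
        array2 = {"item1": "value1", "item2": "value2", "item3": "value3"}
        print(merge_with_defaults(array1, array2))
        # {'item1': 'value1', 'item2': 'value2', 'item3': 'value3', 'item4': ''}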

    Read the article

  • class selector refuses after append to body

    - by supersize
    I'm appending loads of divs into a wrapper: var cubes = [], allCubes = '', for(var i = 0; i < 380; i++) { var randomleft = Math.floor(Math.random()*Math.floor(Math.random()*1000)), randomtop = Math.floor(Math.random()*Math.floor(Math.random()*1000)); allCubes += '<div id="cube'+i+'" class="cube" style="position: absolute; border: 2px #000 solid; left: '+randomleft+'px; top: '+randomtop+'px; width: 9px; height: 9px; z-index: -1"></div>'; } $('#wrapper').append(allCubes); // performance for(var i = 0; i < 380; i++) { cubes.push($('#cube'+i)); } and then I would like to make them all draggable with jQuery UI and log their current position: var allc = $('.cube'); allc.draggable().on('mouseup', function(i) { allc.each(function() { var nleft = $(this).offset().left; var ntop = $(this).offset().top; console.log('cubes['+i+'].animate({ left:'+nleft+',top:'+ntop+'})'); }); }); Unfortunately it does not work: they are not draggable, and no log output appears. Thanks

    Read the article

  • Using map() on a _set in a template?

    - by Stuart Grimshaw
    I have two models like this: class KPI(models.Model): """KPI model to hold the basic info on a Key Performance Indicator""" title = models.CharField(blank=False, max_length=100) description = models.TextField(blank=True) target = models.FloatField(blank=False, null=False) group = models.ForeignKey(KpiGroup) subGroup = models.ForeignKey(KpiSubGroup, null=True) unit = models.TextField(blank=True) owner = models.ForeignKey(User) bt_measure = models.BooleanField(default=False) class KpiHistory(models.Model): """A historical log of previous KPI values.""" kpi = models.ForeignKey(KPI) measure = models.FloatField(blank=False, null=False) kpi_date = models.DateField() and I'm using RGraph to display the stats on internal wallboards. The handy thing is that Python lists get output in a format that Javascript sees as an array, so by mapping all the values into a list like this: def f(x): return float(x.measure) stats = map(f, KpiHistory.objects.filter(kpi=1)) then in the template I can simply use {{ stats }} and the RGraph code sees it as an array, which is exactly what I want: [87.0, 87.5, 88.5, 90] So my question is this: is there any way I can achieve the same effect using Django's _set functionality, to keep down the amount of data I'm passing into the template? Up until now I've been passing in a single KPI object to be graphed, but now I want to pass in a whole bunch, so is there anything I can do with _set? {{ kpi.kpihistory_set }} dumps the whole model out, but I just want the measure field. I can't see any of the built-in template methods that will let me pull out just the single field I want. How have other people handled this situation?
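
    One way that tends to keep the template thin is to do the flattening in the view with values_list('measure', flat=True), which pulls out just that one column per KPI. A rough sketch under the question's models (the view name and the choice of which KPIs to graph are assumptions):

        from django.shortcuts import render
        from .models import KPI   # the KPI / KpiHistory models shown in the question

        def wallboard(request):
            # Map each KPI's primary key to a plain list of its measure values.
            stats = {
                kpi.pk: [float(m) for m in
                         kpi.kpihistory_set.values_list('measure', flat=True)]
                for kpi in KPI.objects.all()
            }
            # Plain lists still render as Javascript-friendly literals in the template.
            return render(request, 'wallboard.html', {'stats': stats})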

    Read the article

  • JavaScript array random index insertion and deletion

    - by Tomi
    I'm inserting some items into an array with randomly created indexes, for example like this: var myArray = new Array(); myArray[123] = "foo"; myArray[456] = "bar"; myArray[789] = "baz"; ... In other words, the array indexes do not start with zero and there will be "numeric gaps" between them. My questions are: Will these numeric gaps be somehow allocated (and therefore take some memory) even when they do not have assigned values? When I delete myArray[456] from the example above, would items below this item be relocated? EDIT: Regarding my question/concern about relocation of items after insertion/deletion - I want to know what happens with the memory, not the indexes. More information from the Wikipedia article on linked lists: Linked lists have several advantages over dynamic arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration.

    Read the article

  • Is there a faster way to access a property member of a class using reflection?

    - by Ross Goddard
    I am currently using the following code to access the property of an object using reflection: Dim propInfo As Reflection.PropertyInfo = myType.GetProperty(propName) Dim objValue As Object = propInfo.GetValue(myObject, Nothing) I am having some issues with the speed, since this type of code is being called many times and is causing some slowdown. I have been looking into using Reflection.Emit or dynamic methods, but I am not sure exactly how to make use of them. Background information: I am creating a list of a subset of the properties of the object, associating them with some meta information (such as whether they can be loaded from the database or XML, whether they are editable, and whether the user can see them). This is for later consumption so we can write code such as: foreach prop as BaseWrapper in graphNode.NodeProperties prop.LoadFromDataRow(dr) next The application makes heavy use of having access to this list. The problem is that on the initial load of a project, a large number of objects that make use of this are being created, so for each object created it is looping through this code a number of times. I initially tried adding each property to the list manually, but this ran into problems with not everything being initialized at the correct time, and some other issues. If there is no other good way, then I may have to rethink some of the design and see what else can be done to improve the performance.
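
    The usual cure for repeated GetProperty/GetValue calls is to resolve each property accessor once and cache a callable for it (in .NET, for example, a delegate built with Delegate.CreateDelegate or a compiled expression tree). Purely to illustrate that resolve-once, call-many shape, here is a Python sketch using operator.attrgetter; the class and property names are made up:

        from operator import attrgetter

        class GraphNode:
            def __init__(self, name, weight):
                self.name = name
                self.weight = weight

        # Resolve the accessor once, outside the hot loop...
        get_weight = attrgetter("weight")

        nodes = [GraphNode("a", 1.0), GraphNode("b", 2.5)]

        # ...then reuse it for every object instead of re-resolving per call.
        print([get_weight(n) for n in nodes])   # [1.0, 2.5]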

    Read the article

  • Detecting well behaved / well known bots

    - by Simon_Weaver
    I found this question very interesting: Programmatic Bot Detection. I have a very similar question, but I'm not bothered about 'badly behaved bots'. I am tracking (in addition to Google Analytics) the following per visit: Entry URL Referer UserAgent Adwords (by means of query string) Whether or not the user made a purchase etc. The problem is that to calculate any kind of conversion rate I'm ending up with lots of 'bot' visits that are greatly skewing my results. I'd like to ignore as many bot visits as possible, but I want a solution that I don't need to monitor too closely, that won't in itself be a performance hog, and that preferably still works if someone has JavaScript disabled. Are there good published lists of the top 100 bots or so? I did find a list at http://www.user-agents.org/ but that appears to contain hundreds if not thousands of bots. I don't want to check every referer against thousands of links. Here is the current googlebot UserAgent. How often does it change? Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
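
    A sketch of the short-list idea in Python: instead of checking each visit against thousands of entries, match the user agent once against a small pattern of the big, well behaved crawlers. The token list below is only an illustrative assumption, not an authoritative registry, and would need occasional upkeep:

        import re

        # Tokens for a handful of major, well behaved crawlers (illustrative list only).
        BOT_UA_RE = re.compile(
            r"googlebot|bingbot|slurp|duckduckbot|baiduspider|yandexbot|facebookexternalhit",
            re.IGNORECASE,
        )

        def is_known_bot(user_agent):
            # Cheap first-pass filter before a visit is counted toward conversion rates.
            return bool(BOT_UA_RE.search(user_agent or ""))

        ua = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        print(is_known_bot(ua))   # True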

    Read the article

  • Do I need a spatial index in my database?

    - by Sanoj
    I am designing an application that needs to save geometric shapes in a database. I haven't chosen the database management system yet. In my application, all database queries will have a bounding box as input, and as output I want all shapes within that bounding box. I know that databases with a spatial index are used for this kind of application. But in my application there will not be any queries of the type "give me objects near x/y" or other more complex queries that are useful in a GIS application. I am planning to have a database without a spatial index and queries looking like: SELECT * FROM shapes WHERE x < max_x AND x > min_x AND y < max_y AND y > min_y And have an index on the columns x (double) and y (double). As far as I can see, I don't really need a database with a spatial index, even though my application is close to that kind of application. And even if I would like to have nearby queries, I could create a big enough bounding box around that point. Or will this lead to poor performance? Do I really need a spatial database? And when is a spatial index needed?

    Read the article

  • Tracking down origin of I/O in multi-process server

    - by Craig Ringer
    I'm currently trying to track down some phantom I/O in a PostgreSQL build I'm testing. It's a multi-process server and it isn't simple to associate disk I/O back to a particular back-end and query. I thought Linux's perf tool would be ideal for this, but I'm struggling to capture block I/O performance counter metrics and associate them with user-space activity. It's easy to record block I/O requests and completions with, eg: sudo perf record -g -T -u postgres -e 'block:block_rq_*' and the user-space pid is recorded, but there's no kernel or user-space stack captured, or ability to snapshot bits of the user-space process's heap (say, query text) etc. So while you have the pid, you don't know what the process was doing at that point. Just perf script output like: postgres 7462 [002] 301125.113632: block:block_rq_issue: 8,0 W 0 () 208078848 + 1024 [postgres] If I add the -g flag to perf record it'll take snapshots of the kernel stack, but doesn't capture user-space state for perf events captured in the kernel. The user-space stack only goes up to the entry-point from userspace, like LWLockRelease, LWLockAcquire, memcpy (mmap'd IO), __GI___libc_write, etc. So. Any tips? Being able to capture a snapshot of the user-space stack in response to kernel events would be ideal.

    Read the article

  • Aliasing `T*` with `char*` is allowed. Is it also allowed the other way around?

    - by StackedCrooked
    Note: This question has been renamed and reduced to make it more focused and readable. Most of the comments refer to the old text. According to the standard, objects of different type may not share the same memory location. So this would not be legal: int i = 0; short * s = reinterpret_cast<short*>(&i); // BAD! The standard however allows an exception to this rule: any object may be accessed through a pointer to char or unsigned char: int i = 0; char * c = reinterpret_cast<char*>(&i); // OK However, it is not clear to me whether this is also allowed the other way around. For example: char * c = read_socket(...); unsigned * u = reinterpret_cast<unsigned*>(c); // huh? Summary of the answers The answer is NO, for two reasons: You can only access an existing object as char*. There is no object in my sample code, only a byte buffer. The pointer address may not have the right alignment for the target object. In that case dereferencing it would result in undefined behavior. On the Intel and AMD platforms it will result in a performance overhead. On ARM it will trigger a CPU trap and your program will be terminated! This is a simplified explanation. For more detailed information see the answers by @Luc Danton, @Cheers and hth. - Alf and @David Rodríguez.
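
    Since the practical problem is reading an unsigned value out of a byte buffer, the portable fix on the C++ side is to copy the bytes into a properly typed object (memcpy) rather than reinterpret the pointer. The same copy-out idea, sketched in Python with the struct module (byte order chosen explicitly, which a pointer cast never does for you):

        import struct

        buf = bytes([0x78, 0x56, 0x34, 0x12, 0xAA, 0xBB])   # e.g. data read from a socket

        # Copy four bytes out of the buffer as a little-endian unsigned 32-bit value,
        # instead of pretending the buffer *is* an unsigned int at that address.
        (value,) = struct.unpack_from("<I", buf, 0)
        print(hex(value))   # 0x12345678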

    Read the article

  • Grouping by date, with 0 when count() yields no lines

    - by SCO
    I'm using PostgreSQL 9 and I'm fighting with counting and grouping when no lines are counted. Let's assume the following schema: create table views ( date_event timestamp with time zone, event_id integer ); Let's imagine the following content: 2012-01-01 00:00:05 2 2012-01-01 01:00:05 5 2012-01-01 03:00:05 8 2012-01-01 03:00:15 20 I want to group by hour and count the number of lines. I wish I could retrieve the following: 2012-01-01 00:00:00 1 2012-01-01 01:00:00 1 2012-01-01 02:00:00 0 2012-01-01 03:00:00 2 2012-01-01 04:00:00 0 2012-01-01 05:00:00 0 . . 2012-01-07 23:00:00 0 I mean that for each time slot, I count the number of lines in my table whose date falls in that slot; otherwise I return a line with a count of zero. The following will definitely not work (it will yield only the rows whose count is > 0): SELECT extract ( hour from date_event ),count(*) FROM views where date_event > '2012-01-01' and date_event <'2012-01-07' GROUP BY extract ( hour from date_event ); Please note I might also need to group by minute, or by hour, or by day, or by month, or by year (multiple queries are possible of course). I can only use plain old SQL, and since my views table can be very big (100M records), I try to keep performance in mind. How can this be achieved? Thank you!
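
    The usual SQL-side answer is to generate the full series of time buckets (in PostgreSQL, generate_series combined with date_trunc) and LEFT JOIN the counts onto it, so missing buckets come out as 0. Here is the same gap-filling logic sketched in Python, using the sample rows from the question, purely to illustrate the approach:

        from datetime import datetime, timedelta
        from collections import Counter

        # (timestamp, event_id) rows, as in the question's `views` table.
        events = [
            (datetime(2012, 1, 1, 0, 0, 5), 2),
            (datetime(2012, 1, 1, 1, 0, 5), 5),
            (datetime(2012, 1, 1, 3, 0, 5), 8),
            (datetime(2012, 1, 1, 3, 0, 15), 20),
        ]

        start, end = datetime(2012, 1, 1), datetime(2012, 1, 2)

        # Count events per hour bucket...
        per_hour = Counter(ts.replace(minute=0, second=0, microsecond=0)
                           for ts, _ in events)

        # ...then walk every bucket in the range so missing hours report 0.
        bucket = start
        while bucket < end:
            print(bucket, per_hour.get(bucket, 0))
            bucket += timedelta(hours=1)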

    Read the article

  • What is better for a student programming in C++ to learn for writing GUIs: C# or Qt?

    - by flashnik
    I'm a teacher (instructor) of CS at a university. The course is based on Cormen and Knuth, and students program algorithms in C++. But sometimes it is good to show how an algorithm works, or just the result of a task, through a GUI. Also, in my opinion it's very important to be able to write full programs. They will have courses covering GUIs, but three years later, in fact just before graduation. I think that they should be able to write simple GUI applications earlier, so I want to teach them that. What do you think is more useful for them to learn: programming a GUI with Qt, or writing a GUI in C# and calling an unmanaged C++ library? Update. For developing C++ applications students use MS Visual Studio, so C# is already installed. But Qt, AFAIK, can also be integrated into VS. I have the following pros for C# (some were suggested in the answers): The need to make an additional layer. It's more work, but it forces you to explicitly specify the contract between the GUI and the data processing. The border between GUI and algorithms becomes very clear. It's more popular among employers. At least in Russia, where we live. It's rather common to write performance-critical algorithms in C++ and P/Invoke them from a nice-looking C# application/ASP.NET website. Maybe it is not so widespread in the rest of the world, but in Russia Windows is very popular, especially in companies and corporations, for various reasons, so most B2B applications are Windows applications. Rapid development. It's much quicker to code in .NET than in C++, for many reasons. And the con is that it's a new language for students, with its own specifics. And the mess with invoking calls to the library.

    Read the article

  • Using a single texture image unit with multiple sampler uniforms

    - by bcrist
    I am writing a batching system which tracks currently bound textures in order to avoid unnecessary glBindTexture() calls. I'm not sure if I need to keep track of which textures have already been used by a particular batch so that if a texture is used twice, it will be bound to a different TIU for the second sampler which requires it. Is it acceptable for an OpenGL application to use the same texture image unit for multiple samplers within the same shader stage? What about samplers in different shader stages? For example: Fragment shader: ... uniform sampler2D samp1; uniform sampler2D samp2; void main() { ... } Main program: ... glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, tex_id); glUniform1i(samp1_location, 0); glUniform1i(samp2_location, 0); ... I don't see any reason why this shouldn't work, but what about if the shader program also included a vertex shader like this: Vertex shader: ... uniform sampler2D samp1; void main() { ... } In this case, OpenGL is supposed to treat both instances of samp1 as the same variable, and exposes a single location for them. Therefore, the same texture unit is being used in the vertex and fragment shaders. I have read that using the same texture in two different shader stages counts doubly against GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS but this would seem to contradict that. In a quick test on my hardware (HD 6870), all of the following scenarios worked as expected: 1 TIU used for 2 sampler uniforms in same shader stage 1 TIU used for 1 sampler uniform which is used in 2 shader stages 1 TIU used for 2 sampler uniforms, each occurring in a different stage. However, I don't know if this is behavior that I should expect on all hardware/drivers, or if there are performance implications.

    Read the article

  • Type-safe generic data structures in plain-old C?

    - by Bradford Larsen
    I have done far more C++ programming than "plain old C" programming. One thing I sorely miss when programming in plain C is type-safe generic data structures, which are provided in C++ via templates. For sake of concreteness, consider a generic singly linked list. In C++, it is a simple matter to define your own template class, and then instantiate it for the types you need. In C, I can think of a few ways of implementing a generic singly linked list: Write the linked list type(s) and supporting procedures once, using void pointers to go around the type system. Write preprocessor macros taking the necessary type names, etc, to generate a type-specific version of the data structure and supporting procedures. Use a more sophisticated, stand-alone tool to generate the code for the types you need. I don't like option 1, as it subverts the type system, and would likely have worse performance than a specialized type-specific implementation. Using a uniform representation of the data structure for all types, and casting to/from void pointers, so far as I can see, necessitates an indirection that would be avoided by an implementation specialized for the element type. Option 2 doesn't require any extra tools, but it feels somewhat clunky, and could give bad compiler errors when used improperly. Option 3 could give better compiler error messages than option 2, as the specialized data structure code would reside in expanded form that could be opened in an editor and inspected by the programmer (as opposed to code generated by preprocessor macros). However, this option is the most heavyweight, a sort of "poor-man's templates". I have used this approach before, using a simple sed script to specialize a "templated" version of some C code. I would like to program my future "low-level" projects in C rather than C++, but have been frightened by the thought of rewriting common data structures for each specific type. What experience do people have with this issue? Are there good libraries of generic data structures and algorithms in C that do not go with Option 1 (i.e. casting to and from void pointers, which sacrifices type safety and adds a level of indirection)?

    Read the article

  • Iterate over defined elements of a JS array

    - by sibidiba
    I'm using a JS array to map IDs to actual elements, i.e. a key-value store. I would like to iterate over all elements. I tried several methods, but all have their caveats: for (var item in map) {...} iterates over all properties of the array, and therefore will also include functions and extensions to Array.prototype. For example, someone dropping in the Prototype library in the future will break existing code. var length = map.length; for (var i = 0; i < length; i++) { var item = map[i]; ... } does work, but just like $.each(map, function(index, item) {...}); it iterates over the whole range of indexes 0..max(id), which has horrible drawbacks: var x = []; x[1]=1; x[10]=10; $.each(x, function(i,v) {console.log(i+": "+v);}); 0: undefined 1: 1 2: undefined 3: undefined 4: undefined 5: undefined 6: undefined 7: undefined 8: undefined 9: undefined 10: 10 Of course my IDs won't resemble a continuous sequence either. Moreover there can be huge gaps between them, so skipping undefined in the latter case is unacceptable for performance reasons. How is it possible to safely iterate over only the defined elements of an array (in a way that works in all browsers and IE)?
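
    The gaps largely stop being a problem once the ID-to-element store is treated as a map rather than an array; in JavaScript that usually means a plain object iterated with a hasOwnProperty guard, or Object.keys. A Python sketch of the same shape, for illustration only:

        items = {}              # ID -> element store; only real entries exist
        items[1] = 1
        items[10] = 10

        for key, value in items.items():    # visits exactly the defined entries
            print(str(key) + ": " + str(value))
        # 1: 1
        # 10: 10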

    Read the article

  • Do I need to use locking on integers in C++ threads?

    - by Shane MacLaughlin
    The title says it all really. If I am accessing a single integer type (e.g. long, int, bool, etc...) from multiple threads, do I need to use a synchronisation mechanism such as a mutex to lock it? My understanding is that, as atomic types, they don't need locking, but I see a lot of code out there that does use locking. Profiling such code shows that there is a significant performance hit for using locks, so I'd rather not. So if the item I'm accessing corresponds to a bus-width integer (e.g. 4 bytes on a 32-bit processor), do I need to lock access to it when it is being used across multiple threads? Put another way, if thread A is writing to integer variable X at the same time as thread B is reading from the same variable, is it possible that thread B could end up with a few bytes of the previous value mixed in with a few bytes of the value being written? Is this architecture dependent, e.g. OK for 4-byte integers on 32-bit systems but unsafe for 8-byte integers on 64-bit systems? Edit: Just saw this related post, which helps a fair bit.

    Read the article

  • Faster way to convert from a String to generic type T when T is a valuetype?

    - by Kumba
    Does anyone know of a fast way in VB to go from a string to a generic type T constrained to a valuetype (Of T As Structure), when I know that T will always be some number type? This is too slow for my taste: Return DirectCast(Convert.ChangeType(myStr, GetType(T)), T) But it seems to be the only sane method of getting from a String to T. I've tried using Reflector to see how Convert.ChangeType works, and while I can convert from the String to a given number type via a hacked-up version of that code, I have no idea how to jam that type back into T so it can be returned. I'll add that part of the speed penalty I'm seeing (in a timing loop) is because the return value is getting assigned to a Nullable(Of T) value. If I strongly type my class for a specific number type (i.e., UInt16), then I can vastly increase the performance, but then the class would need to be duplicated for each numeric type that I use. It'd almost be nice if there were a converter to/from T while working on it in a generic method/class. Maybe there is and I'm oblivious to its existence?

    Read the article

  • Using the same log4j logger in a standalone Java application

    - by ktaylorjohn
    I have some code which is a standalone Java application comprising 30+ classes. Most of these inherit from some other base classes. Each and every class has this method to get and use a log4j logger: public static Logger getLogger() { if (logger != null) return logger; try { PropertyUtil propUtil = PropertyUtil.getInstance("app-log.properties"); if (propUtil != null && propUtil.getProperties() != null) PropertyConfigurator.configure(propUtil.getProperties ()); logger = Logger.getLogger(ExtractData.class); return logger; } catch (Exception exception) { exception.printStackTrace(); } } A) My question is whether this should be refactored to some common logger which is initialized once and used across all classes? Is that a better practice? B) If yes, how can this be done? How can I pass the logger around? C) This is actually being used in the code not as Logger.debug() but as getLogger().debug(). What is the impact of this in terms of performance?
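
    For comparison, the usual pattern is to configure logging exactly once at application startup and give each class its own cheap named logger (in log4j, a private static final Logger per class, with PropertyConfigurator.configure called a single time from main rather than inside every getLogger()). A sketch of the same pattern using Python's logging module, since the shape is identical:

        import logging

        # Configured exactly once, at application startup (the analogue of calling
        # PropertyConfigurator.configure a single time, not in every class).
        logging.basicConfig(level=logging.DEBUG,
                            format="%(asctime)s %(name)s %(levelname)s %(message)s")

        # Each module/class then asks for its own named logger; getLogger caches and
        # reuses instances, so repeated calls are cheap.
        log = logging.getLogger(__name__)

        class ExtractData:
            def run(self):
                log.debug("extract started")

        ExtractData().run()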

    Read the article

  • Is there a Java data structure that is effectively an ArrayList with double indices and built-in interpolation?

    - by Bob Cross
    I am looking for a pre-built Java data structure with the following characteristics: It should look something like an ArrayList but should allow indexing via double-precision rather than integers. Note that this means that it's likely that you'll see indices that don't line up with the original data points (i.e., asking for the value that corresponds to key "1.5"). As a consequence, the value returned will likely be interpolated. For example, if the key is 1.5, the value returned could be the average of the value at key 1.0 and the value at key 2.0. The keys will be sorted but the values are not ensured to be monotonically increasing. In fact, there's no assurance that the first derivative of the values will be continuous (making it a poor fit for certain types of splines). Freely available code only, please. For clarity, I know how to write such a thing. In fact, we already have an implementation of this and some related data structures in legacy code that I want to replace due to some performance and coding issues. What I'm trying to avoid is spending a lot of time rolling my own solution when there might already be such a thing in the JDK, Apache Commons or another standard library. Frankly, that's exactly the approach that got this legacy code into the situation that it's in right now.... Is there such a thing out there in a freely available library?
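
    In case nothing off the shelf turns up, the structure is small enough to sketch: keep the keys sorted, binary-search the query key, and linearly interpolate between the two bracketing values. A Python sketch of a hypothetical class (in Java, the same thing can be built on a TreeMap's floorEntry/ceilingEntry or a sorted array with binary search):

        import bisect

        class InterpolatingMap:
            """Sorted double keys -> double values, linearly interpolated between keys."""

            def __init__(self, points):                # points: iterable of (key, value)
                pairs = sorted(points)
                self._keys = [k for k, _ in pairs]
                self._values = [v for _, v in pairs]

            def __getitem__(self, x):
                keys, values = self._keys, self._values
                if x <= keys[0]:
                    return values[0]
                if x >= keys[-1]:
                    return values[-1]
                i = bisect.bisect_left(keys, x)        # first index with keys[i] >= x
                if keys[i] == x:
                    return values[i]
                x0, x1 = keys[i - 1], keys[i]          # bracketing keys
                y0, y1 = values[i - 1], values[i]
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

        m = InterpolatingMap([(1.0, 87.0), (2.0, 88.0)])
        print(m[1.5])   # 87.5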

    Read the article

  • Optimising ruby regexp -- lots of match groups

    - by Farcaller
    I'm working on a Ruby-based lexer. To improve performance, I joined up all tokens' regexps into one big regexp with match group names. The resulting regexp looks like: /\A(?<__anonymous_-1038694222803470993>(?-mix:\n+))|\A(?<__anonymous_-1394418499721420065>(?-mix:\/\/[\A\n]*))|\A(?<__anonymous_3077187815313752157>(?-mix:include\s+"[\A"]+"))|\A(?<LET>(?-mix:let\s))|\A(?<IN>(?-mix:in\s))|\A(?<CLASS>(?-mix:class\s))|\A(?<DEF>(?-mix:def\s))|\A(?<DEFM>(?-mix:defm\s))|\A(?<MULTICLASS>(?-mix:multiclass\s))|\A(?<FUNCNAME>(?-mix:![a-zA-Z_][a-zA-Z0-9_]*))|\A(?<ID>(?-mix:[a-zA-Z_][a-zA-Z0-9_]*))|\A(?<STRING>(?-mix:"[\A"]*"))|\A(?<NUMBER>(?-mix:[0-9]+))/ I'm matching it against my string, producing a MatchData where exactly one token is parsed: bigregex =~ "\n ... garbage" puts $~.inspect Which outputs #<MatchData "\n" __anonymous_-1038694222803470993:"\n" __anonymous_-1394418499721420065:nil __anonymous_3077187815313752157:nil LET:nil IN:nil CLASS:nil DEF:nil DEFM:nil MULTICLASS:nil FUNCNAME:nil ID:nil STRING:nil NUMBER:nil> So, the regex actually matched the "\n" part. Now, I need to figure out the match group it belongs to (it's clearly visible from the #inspect output that it's _anonymous-1038694222803470993, but I need to get it programmatically). I could not find any option other than iterating over #names: m.names.each do |n| if m[n] type = n.to_sym resolved_type = (n.start_with?('__anonymous_') ? nil : type) val = m[n] break end end which verifies that the match group did have a match. The problem here is that it's slow (I spend about 10% of the time in the loop, and another 8% grabbing @input[@pos..-1] to make sure that \A matches the start of the string; I do not discard input, just shift @pos within it). You can check the full code in the GH repo. Any ideas on how to make it at least a bit faster? Is there an easier way to figure out the "successful" match group?
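
    For what it's worth, the same "which named alternative fired?" question has a direct answer in Python's re module via the match object's lastgroup attribute, sketched below with a much smaller token set; whether Ruby's MatchData offers an equivalent shortcut is exactly what the question is asking, so treat this as an illustration of the idea rather than a Ruby answer:

        import re

        # One named alternative per token type, like the joined-up lexer regexp above.
        TOKEN_RE = re.compile(r"""
              (?P<NEWLINE>\n+)
            | (?P<LET>let\s)
            | (?P<ID>[a-zA-Z_][a-zA-Z0-9_]*)
            | (?P<NUMBER>[0-9]+)
        """, re.VERBOSE)

        m = TOKEN_RE.match("\n   let x")
        print(m.lastgroup, repr(m.group()))   # NEWLINE '\n'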

    Read the article

  • Color banding only on Android 4.0+

    - by threeshinyapples
    On emulators running Android 4.0 or 4.0.3, I am seeing horrible colour banding which I can't seem to get rid of. On every other Android version I have tested, gradients look smooth. I have a SurfaceView which is configured as RGBX_8888, and the banding is not present in the rendered canvas. If I manually dither the image by overlaying a noise pattern at the end of rendering I can make the gradients smooth again, though obviously at a cost to performance which I'd rather avoid. So the banding is being introduced later. I can only assume that, on 4.0+, my SurfaceView is being quantized to a lower bit-depth at some point between it being drawn and being displayed, and I can see from a screen capture that gradients are stepping 8 values at a time in each channel, suggesting a quantization to 555 (not 565). I added the following to my Activity onCreate function, but it made no difference. getWindow().setFormat(PixelFormat.RGBA_8888); getWindow().addFlags(WindowManager.LayoutParams.FLAG_DITHER); I also tried putting the above in onAttachedToWindow() instead, but there was still no change. (I believe that RGBA_8888 is the default window format anyway for 2.2 and above, so it's little surprise that explicitly setting that format has no effect on 4.0+.) Which leaves the question, if the source is 8888 and the destination is 8888, what is introducing the quantization/banding and why does it only appear on 4.0+? Very puzzling. I wonder if anyone can shed some light?

    Read the article

  • C Population Count of unsigned 64-bit integer with a maximum value of 15

    - by BitTwiddler1011
    I use a population count (Hamming weight) function intensively in a Windows C application and have to optimize it as much as possible in order to boost performance. In more than half the cases where I use the function, I only need to know the value up to a maximum of 15. The software will run on a wide range of processors, both old and new. I already make use of the POPCNT instruction when Intel's SSE4.2 or AMD's SSE4a is present, but would like to optimize the software implementation (used as a fallback if no SSE4 is present) as much as possible. Currently I have the following software implementation of the function: inline int population_count64(unsigned __int64 w) { w -= (w >> 1) & 0x5555555555555555ULL; w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL); w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL; return int((w * 0x0101010101010101ULL) >> 56); } So to summarize: (1) I would like to know if it is possible to optimize this for the case when I only want to know the value up to a maximum of 15. (2) Is there a faster software implementation (for both Intel and AMD CPUs) than the function above?
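
    As a cross-check on the bit-twiddling above, and of the "stop as soon as the count exceeds 15" idea, here is a straightforward nibble-table reference sketched in Python; it is meant as a model for validating a C implementation, not as an optimization in itself:

        # 4-bit lookup table: popcount of every value 0..15.
        NIBBLE_POPCOUNT = [bin(i).count("1") for i in range(16)]

        def popcount64(w, cap=None):
            """Reference popcount of a 64-bit value; bails out once `cap` is exceeded."""
            count = 0
            for shift in range(0, 64, 4):
                count += NIBBLE_POPCOUNT[(w >> shift) & 0xF]
                if cap is not None and count > cap:
                    return cap + 1      # caller only needs to know the cap was exceeded
            return count

        assert popcount64(0xFF) == 8
        assert popcount64(0xFFFF, cap=15) == 16   # reported as "more than 15"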

    Read the article

  • Setting pixel values in Nvidia NPP ImageCPU objects?

    - by solvingPuzzles
    In the Nvidia Performance Primitives (NPP) image processing examples in the CUDA SDK distribution, images are typically stored on the CPU as ImageCPU objects, and images are stored on the GPU as ImageNPP objects. boxFilterNPP.cpp is an example from the CUDA SDK that uses these ImageCPU and ImageNPP objects. When using a filter (convolution) function like nppiFilter, it makes sense to define a filter as an ImageCPU object. However, I see no clear way of setting the values of an ImageCPU object. npp::ImageCPU_32f_C1 hostKernel(3,3); //allocate space for 3x3 convolution kernel //want to set hostKernel to [-1 0 1; -1 0 1; -1 0 1] hostKernel[0][0] = -1; //this doesn't compile hostKernel(0,0) = -1; //this doesn't compile hostKernel.at(0,0) = -1; //this doesn't compile How can I manually put values into an ImageCPU object? Notes: I didn't actually use nppiFilter in the code snippet; I'm just mentioning nppiFilter as a motivating example for writing values into an ImageCPU object. The boxFilterNPP.cpp example doesn't involve writing directly to an ImageCPU object, because nppiFilterBox is a special case of nppiFilter that uses a built-in gaussian smoothing filter (probably something like [1 1 1; 1 1 1; 1 1 1]).

    Read the article

  • Good Hosting Providers With Zend Framework Support

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got like 20 domains hosted in my account, none of them are high volume, and they work JUST good enough- until today. This isn't meant to be a condemnation of ixwh though. Their prices are very low for what they do offer and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money making websites- they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, multiple domains- not much. Who should I look at, and who should I look out for? Also- I put this on SO instead of SF because of the Zend Framework specific requirement. If I'm wrong, do as you wish.

    Read the article

  • How to get a category listing from Magento?

    - by alex
    I want to create a page in Magento that shows a visual representation of the categories, for example: CATEGORY product 1 product 2 ANOTHER CATEGORY product 3 My problem is that their database is organised very differently from what I've seen in the past. They have tables dedicated to data types like varchar, int, etc. I assume this is for performance or similar. I haven't found a way to use MySQL to query the database and get a list of categories. I'd then like to match these categories to products, to get a listing of products for each category. Unfortunately Magento seems to make this very difficult. Also, I have not found a method that will work from within a page block. I have created showcase.phtml and put it in the XML layout, and it displays and runs its PHP code. I was hoping for something easy like looping through $this->getAllCategories(); and then a nested loop inside with something like $category->getChildProducts(); Can anyone help me? Thank you!

    Read the article
