Search Results

Search found 9889 results on 396 pages for 'pointer speed'.


  • Programming exercises in Java inheritance for an intern

    - by Tenner
    I work for a small software development team, working primarily in Java, for a very large company. Our new intern showed up sight-unseen (not uncommon in my company). He has some C++ experience but no Java; worse, he's never worked with inheritance in C++. Our code has a great deal of abstraction and a heavy reliance on inheritance, and we need to get him up to speed as quickly as possible. Of course the rest of the team is busy, so we can't take time out of our day to teach a one-student 200-level CS course. Instead, I'd like to give him an actual programming project to work on which highlights how classes, interfaces, method overrides, etc. work. I've had him look at Project Euler, but most of the solutions end up being procedural rather than object-oriented programs. Do any of you have somewhat-straightforward (and relatively quick) projects which you would give to an intern in this situation? Or do any recent (or current) students have a school project they'd be willing to share? Has anyone else had this experience?
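
    For concreteness, a warm-up exercise could start from the classic interface/override pattern. A minimal, hypothetical example of the kind of construct the project should exercise (not from the question):

        // One file, Demo.java: an interface, an implementation, and a polymorphic call.
        interface Shape {
            double area();
        }

        class Circle implements Shape {
            private final double radius;
            Circle(double radius) { this.radius = radius; }
            @Override
            public double area() { return Math.PI * radius * radius; }
        }

        public class Demo {
            public static void main(String[] args) {
                Shape s = new Circle(2.0);    // polymorphic assignment
                System.out.println(s.area()); // dispatches to Circle.area()
            }
        }

    A project that forces the intern to add a second Shape, then an abstract base class, touches most of the inheritance machinery in a day or two.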


  • What is an efficient strategy for multiple threads posting jobs and waiting for a response from a single thread?

    - by jakewins
    In Java, what is an efficient solution to the following problem: I have multiple threads (10-20 or so) generating jobs ("job creators"), and a single thread capable of performing them ("the worker"). Once a job creator has posted a job, it should wait for the job to finish, yielding no result other than "it's done", before it keeps going. For sending the jobs to the worker thread, I think a ring buffer or a similar standard fan-in setup would perhaps be a good approach, but for a job creator to find out that her job has been done, I'm not so sure. The job creators could sleep and have the worker interrupt them when done, or each job creator could have an atomic boolean that it checks and that the worker sets. Neither of those feels very nice. I'd like to do it with as few locks as possible (none, if possible). So to be clear: what I'm looking for is speed, not necessarily simplicity. Does anyone have any suggestions? Links to reading about concurrency strategies would also be very welcome!
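
    For what it's worth, one standard shape for this pairs each posted job with a one-shot completion latch. A minimal Java sketch (it uses a blocking queue from java.util.concurrent rather than the lock-free ring buffer contemplated above, so it is a baseline, not a tuned design):

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.CountDownLatch;

        class Job {
            final Runnable work;
            final CountDownLatch done = new CountDownLatch(1);
            Job(Runnable work) { this.work = work; }
        }

        class SingleWorker {
            private final BlockingQueue<Job> queue = new ArrayBlockingQueue<>(1024);

            // Called from the 10-20 job-creator threads.
            void submitAndWait(Runnable work) throws InterruptedException {
                Job job = new Job(work);
                queue.put(job);   // fan-in: many producers, one consumer
                job.done.await(); // block until the worker signals completion
            }

            // The single worker thread runs this loop.
            void runWorker() throws InterruptedException {
                while (true) {
                    Job job = queue.take();
                    job.work.run();
                    job.done.countDown(); // wakes exactly the creator that posted it
                }
            }
        }

    A Disruptor-style ring buffer can replace the queue to shave the internal lock, but the latch-per-job handshake stays the same.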


  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser, the results fall apart: they turn to garbage when the page uses caching. If I turn off caching, the page itself seems right, but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters. Most of the resources I've found on the Web are either very old or focused on accessing IIS directly. My page is running on a shared hosting account, so I do not have direct access to the IIS7 instance it runs on.

        // In Global.asax; requires using System.IO.Compression
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for gzip or deflate (gzip takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-Encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-Encoding"] = true;
                    }
                }
            }
        }

    Is anyone having better success with this?


  • Who's setting TCP window size down to 0, Indy or Windows?

    - by François
    We have an application server which has been observed sending packets advertising a TCP window size of 0 at times when the network had congestion (at a client's site). We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput, and we would like to be able to act upon it becoming 0 (nothing gets sent, users wait = no good). So, any info, links, or pointers to Indy code are welcome. Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-) Note: it's Indy9/D2007 on Windows Server 2003 SP2. More gory details: the TCP zero-window cases happen on the middle tier talking to the DB server, at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not have been caused by it. We want to know when it happens and have a way to do something (logging at least) in our code. So where do we hook (in Indy?) to know when that condition occurs?


  • What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall-clock time). The I/O has been optimized well enough already: the I/O device (a local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed out. I have more cores available, and in the future will likely have even more. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall-clock time goes). As I understand most hash algorithms, each new bit changes the entire result, and computing one is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing (other than liquid-nitrogen-cooled overclocking), or is it inherently non-parallelizable?
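
    One technique that does parallelize is tree (Merkle) hashing: split the file into fixed-size chunks, hash each chunk independently on its own core, then hash the concatenation of the chunk digests. Note the result is a different value than running the algorithm over the whole file, so both producer and consumer must agree on the scheme. A minimal Java sketch, assuming the chunks have already been read into memory:

        import java.security.MessageDigest;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        class TreeHash {
            // Hash each chunk on its own core, then hash the chunk digests.
            // NOTE: the result is a Merkle-style hash, NOT equal to SHA-256 of the whole file.
            static byte[] treeHash(List<byte[]> chunks) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(
                        Runtime.getRuntime().availableProcessors());
                List<Future<byte[]>> leaves = new ArrayList<>();
                for (byte[] chunk : chunks) {
                    leaves.add(pool.submit(() ->
                            MessageDigest.getInstance("SHA-256").digest(chunk)));
                }
                MessageDigest root = MessageDigest.getInstance("SHA-256");
                for (Future<byte[]> leaf : leaves) {
                    root.update(leaf.get());  // order-preserving combine
                }
                pool.shutdown();
                return root.digest();         // digest over the concatenated leaf digests
            }
        }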


  • How to partition bits in a bit array in less than linear time

    - by SiLent SoNG
    This is an interview question I faced recently. Given an array of 1s and 0s, find a way to partition the bits in place so that the 0s are grouped together and the 1s are grouped together. It does not matter whether the 1s are ahead of the 0s or vice versa. An example input is 101010101, and the output is either 111110000 or 000011111. Solve the problem in less than linear time. To make the problem concrete: the input is an integer array with each element either 1 or 0, and the output is the same integer array, partitioned. To me, this is an easy question if it can be solved in O(N). My approach is to use two pointers starting from both ends of the array: advance each pointer toward the middle, and if a pointer does not point at the correct value, swap the two elements.

        int *start = array;
        int *end = array + length - 1;
        while (start < end) {
            // 0s belong at the end
            if (*end == 0) { --end; continue; }
            // 1s belong at the beginning
            if (*start == 1) { ++start; continue; }
            swap(*start, *end);
        }

    However, the interviewer insists there is a sub-linear solution. This has me thinking hard, but I still haven't got an answer. Can anyone help with this interview question?


  • How can I modify the value of a string defined in a struct?

    - by Eric
    Hi, I have the following code in C++:

        #define TAM 4000
        #define NUMPAGS 512

        struct pagina {
            bitset<12> direccion;
            char operacion;
            char permiso;
            string *dato;
            int numero;
        };

        void crearPagina(pagina *pag[], int pos, int dir) {
            pagina *paginas = (pagina *)malloc(sizeof(char) * TAM);
            paginas->direccion = bitset<12>(dir);
            paginas->operacion = 'n';
            paginas->permiso = 'n';
            string **tempDato = &paginas->dato;
            char *temp = " ";
            **tempDato = temp;
            paginas->numero = 0;
            pag[pos] = paginas;
        }

    I want to modify the value of the variable called "string *dato" in the struct pagina, but every time I try to assign a new value, the program crashes with a segmentation fault. In this case I'm using a pointer to string, but I have also tried with a plain string. In a few words, I want to do the following:

        pagina->dato = "test";

    Any idea? Thanks in advance!!!


  • Stopping cookies being set from a domain (aka "cookieless domain") to increase site performance

    - by Django Reinhardt
    I was reading Google's documentation about improving site speed. One of their recommendations is serving static content (images, CSS, JS, etc.) from a "cookieless domain": Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies. Google then says that the best way to do this is to buy a new domain and point it at your current one: To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources. This is pretty straightforward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that lets you say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I just request a .jpg from the new domain, a cookie is set in my browser, even though our application's cookies are scoped to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set. Apologies if this is a real newb question, I'm new at this! Thanks.
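
    On the ASP.NET side specifically, the .ASPXANONYMOUS cookie is issued by the anonymous identification feature, which can be switched off in the static domain's web.config. A sketch, assuming the new domain runs as its own IIS application and nothing on it needs sessions or anonymous profiles:

        <system.web>
          <!-- .ASPXANONYMOUS comes from this feature; turn it off -->
          <anonymousIdentification enabled="false" />
          <!-- no session state means no ASP.NET_SessionId cookie either -->
          <sessionState mode="Off" />
        </system.web>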


  • How can I get type information at runtime from a DMP file in a WinDbg extension?

    - by pj4533
    This is related to my previous question about pulling objects from a dmp file. As I mentioned there, I can successfully pull objects out of the dmp file by creating wrapper 'remote' objects. I have implemented several of these so far, and it seems to be working well. However, I have run into a snag. In one case, a pointer is declared as, say, type 'SomeBaseClass', but the object is actually of type 'SomeDerivedClass', which derives from 'SomeBaseClass'. For example, it would be something like this:

        MyApplication!SomeObject
           +0x000 field1 : Ptr32 SomeBaseClass
           +0x004 field2 : Ptr32 SomeOtherClass
           +0x008 field3 : Ptr32 SomeOtherClass

    I need some way to find out what the ACTUAL type of 'field1' is. To be more specific, using example addresses:

        MyApplication!SomeObject
           +0x000 field1 : 0cae2e24 SomeBaseClass
           +0x004 field2 : 0x262c8d3c SomeOtherClass
           +0x008 field3 : 0x262c8d3c SomeOtherClass

        0:000> dt SomeBaseClass 0cae2e24
        MyApplication!SomeBaseClass
           +0x000 __VFN_table : 0x02de89e4
           +0x038 basefield1 : (null)
           +0x03c basefield2 : 3

        0:000> dt SomeDerivedClass 0cae2e24
        MyApplication!SomeDerivedClass
           +0x000 __VFN_table : 0x02de89e4
           +0x038 basefield1 : (null)
           +0x03c basefield2 : 3
           +0x040 derivedfield1 : 357
           +0x044 derivedfield2 : timecode_t

    When I am in WinDbg, I can do 'dt 0x02de89e4' and it will show the type:

        0:000> dt 0x02de89e4
        SomeDerivedClass::`vftable'
        Symbol not found.

    But how do I get that inside an extension? Can I use SearchMemory() to look for SomeDerivedClass::`vftable'? If you follow my other question: I need this type information so I know what type of wrapper remote class to create. I figure it might end up being some sort of case statement where I match a string to a type? I am OK with that, but I still don't know where I can get the string that represents the type of the object in question (i.e. SomeObject->field1 in the above example).
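
    One approach worth trying (a sketch, not tested against this dump): the dbgeng symbol API can map the vftable address back to a symbol name, which is effectively what the dt command is doing above. Assuming m_Symbols holds the extension's IDebugSymbols pointer and vfnTable the __VFN_table value read from the dump:

        char name[256];
        ULONG64 displacement = 0;
        // m_Symbols is assumed to be the extension's IDebugSymbols* interface,
        // and vfnTable the vftable address read from the object (e.g. 0x02de89e4).
        HRESULT hr = m_Symbols->GetNameByOffset(vfnTable, name, sizeof(name),
                                                NULL, &displacement);
        if (SUCCEEDED(hr) && displacement == 0) {
            // name now looks like "MyApplication!SomeDerivedClass::`vftable'";
            // strip the module prefix and the ::`vftable' suffix to get the class name.
        }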


  • Static vs. Non-Static Method Performance in C#

    - by dotnetguts
    Hello all, I have a few global methods declared in a public class in my ASP.NET web application. I have a habit of declaring all global methods in a public class in the following format:

        public static string MethodName(parameters)
        {
        }

    I want to know how this impacts performance. 1) Which one is better, a static method or a non-static method? 2) What is the reason it is better? The following link suggests non-static methods are good because static methods use locks to be thread-safe, always doing an internal Monitor.Enter() and Monitor.Exit() to ensure thread safety: http://bytes.com/topic/c-sharp/answers/231701-static-vs-non-static-function-performance And the following link suggests static methods are good: static methods are normally faster to invoke on the call stack than instance methods. There are several reasons for this in the C# programming language. Instance methods actually use the 'this' instance pointer as the first parameter, so an instance method will always have that overhead. Instance methods are also implemented with the callvirt instruction in the intermediate language, which imposes a slight overhead. Please note that changing your methods to static methods is unlikely to help much on ambitious performance goals, but it can help a tiny bit and possibly lead to further reductions. http://dotnetperls.com/static-method I am a little confused about which one to use. Thanks


  • Animation management in cocos2d-iphone

    - by shreya
    Hi all, I have about 255 image frames for the background animation, 99 frames for the enemy sprite, and 125 frames for the player sprite. All animations run simultaneously on the screen: the background animation is running, 4-5 enemies are present at a time, and the player is there as well. Take a look at the code below:

        CCAnimation *_enemyAnimation = [CCAnimation animationWithName:@"Enemy" delay:0.1f];
        for (int i = 1; i < 99; i++) {
            [_enemyAnimation addFrameWithFilename:
                [NSString stringWithFormat:@"enemy %02d.jpg", i]];
        }
        id action1 = [CCAnimate actionWithAnimation:_enemyAnimation];
        [_enemySprite runAction:[CCRepeatForever actionWithAction:action1]];
        [self schedule:@selector(BackToGameLogic:) interval:5.0];

    This makes my game much slower and consumes about 65MB of memory in the allocations. How should I manage my animations to improve speed and reduce memory consumption? Please suggest an approach. Thanks.


  • Can someone tell me why I am seg faulting in this simple C program?

    - by user299648
    I keep getting seg faulted after I end my first for loop, and for the life of me I don't know why. The file I'm scanning is just 18 strings on 18 lines. I think the problem is the way I'm mallocing the double pointer called picks, but I don't know exactly why. I'm only trying to scanf strings that are fewer than 15 chars long, so I don't see the problem. Can someone please help?

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX_LENGTH 100

        int main(int argc, char *argv[])
        {
            char *string = malloc(15 * sizeof(char));
            char **picks = malloc(15 * sizeof(char *));
            FILE *pick_file = fopen(argv[1], "r");
            int num_picks;

            for (num_picks = 0; fgets(string, MAX_LENGTH, pick_file) != NULL; num_picks++) {
                scanf("%s", picks + num_picks);
            }

            // this is where I seg fault
            int x;
            for (x = 0; x < num_picks; x++)
                printf("%s\n", picks + x);
        }


  • Program structure in a long-running data-processing Python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple: it proceeds into the main loop, completes the main loop, saves output, and terminates. The basic structure of my programs tends to be like so:

        <import statements>
        <constant declarations>
        <misc function declarations>

        def main():
            for blah in blahs():
                <lots of local variables>
                <lots of tightly coupled computation>
                for something in somethings():
                    <lots more local variables>
                    <lots more computation>
            <etc., etc.>
            <save results>

        if __name__ == "__main__":
            main()

    This gets unmanageable quickly, so I want to refactor it into something more maintainable, without sacrificing execution speed. Each chunk of code relies on a large number of variables, however, so refactoring parts of the computation out into functions would make the parameter lists grow out of hand very quickly. Should I put this sort of code into a Python class and change the local variables into instance variables? Conceptually it doesn't make a great deal of sense to me to turn the program into a class, as the class would never be reused and only one instance would ever be created per run. What is the best-practice structure for this kind of program? I am using Python, but the question is relatively language-agnostic, assuming modern object-oriented language features.
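
    For what it's worth, the class-based version being considered would look roughly like this (a sketch with hypothetical names; the point is that shared state moves onto self instead of into ever-growing parameter lists):

        class Pipeline:
            """Holds the shared state so helpers don't need long parameter lists."""

            def __init__(self, inputs):
                self.inputs = inputs
                self.results = []

            def process_item(self, item):
                # formerly the body of the main loop; state lives on self
                self.results.append(item * 2)  # placeholder computation

            def run(self):
                for item in self.inputs:
                    self.process_item(item)
                return self.results

        if __name__ == "__main__":
            print(Pipeline(range(5)).run())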


  • Do ORMs normally allow circular relations? If so, how would they handle it?

    - by SeanJA
    I was hacking around trying to make a basic ORM that has support for one-to-one and one-to-many relationships. I think I succeeded somewhat, but I am curious about how to handle circular relationships. Say you had something like this:

        user::hasOne('car');
        car::hasMany('wheels');
        car::property('type');
        wheel::hasOne('car');

    You could then do this (theoretically):

        $u = new user();
        echo $u->car->wheels[0]->car->wheels[1]->car->wheels[2]->car->wheels[3]->type;
        #=> "monster truck"

    Now, I am not sure why you would want to do this. It seems like it wastes a whole pile of memory and time just to get to something that could have been reached in a much shorter way. In my small ORM, I now have four copies of the wheel class and four copies of the car class in memory, which causes a problem: if I update one of them and save it back to the database, the rest are out of date and could overwrite changes that were already made. How do other ORMs handle circular references? Do they even allow it? Do they go back up the tree and create a pointer to one of the parents? Do they let the coder shoot themselves in the foot if they are silly enough to go around in circles?
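
    A common way mature ORMs cope with this is an identity map: one shared instance per table/primary-key pair, so every path through the object graph lands on the same object and there is only ever one copy to save. A minimal PHP sketch of the idea (a hypothetical class, not from the ORM in the question):

        class IdentityMap
        {
            private $objects = array();

            // Return the one shared instance for this table/id pair.
            public function get($table, $id, callable $loader)
            {
                $key = $table . '#' . $id;
                if (!isset($this->objects[$key])) {
                    $this->objects[$key] = $loader($table, $id);
                }
                return $this->objects[$key];
            }
        }

    With that in place, $u->car->wheels[0]->car is the same object as $u->car, so the circular walk costs only a few array lookups and stale copies never arise.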


  • How do I build a filtered_streambuf based on basic_streambuf?

    - by swestrup
    I have a project that requires me to insert a filter into a stream so that outgoing data will be modified according to the filter. After some research, it seems that what I want to do is create a filtered_streambuf like this:

        template <class StreamBuf>
        class filtered_streambuf : public StreamBuf {
            ...
        };

    and then insert a filtered_streambuf<> into whichever stream I need to be filtered. My problem is that I don't know what invariants I need to maintain while filtering a stream in order to ensure that derived classes can work as expected. In particular, I may find I have filtered_streambufs built over other filtered_streambufs, and all the various stream inserters, extractors, and manipulators must still work as expected. The trouble is that I just can't seem to work out the minimal interface I need to supply in order to guarantee that an iostream will have what it needs to work correctly. In particular, do I need to fake the movement of the protected pointer variables or not? Do I need a fake data buffer or not? Can I just override the public functions, rewriting them in terms of the base streambuf, or is that too simplistic?
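
    For the output direction, one data point: if the filtering streambuf installs no put area (leaving pbase/pptr/epptr null), the iostream machinery funnels every character through overflow(), so the minimal interface is essentially overflow() plus sync(). A sketch of that unbuffered style (note it wraps a streambuf rather than inheriting from a template parameter as above, and uses a hypothetical per-character filter):

        #include <cctype>
        #include <streambuf>

        // Wraps an existing streambuf and upper-cases everything written through it.
        class upcase_streambuf : public std::streambuf {
        public:
            explicit upcase_streambuf(std::streambuf* dest) : dest_(dest) {}
        protected:
            // No put area is installed, so every character arrives here.
            virtual int_type overflow(int_type ch) {
                if (traits_type::eq_int_type(ch, traits_type::eof()))
                    return traits_type::not_eof(ch);
                char c = traits_type::to_char_type(ch);
                c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
                return dest_->sputc(c);  // forward the filtered character
            }
            virtual int sync() { return dest_->pubsync(); }
        private:
            std::streambuf* dest_;
        };

    Usage would be attaching it over an existing stream's rdbuf(). Because no buffer pointers are installed, there is nothing to fake; buffered variants additionally need xsputn() and careful pointer bookkeeping, which is where the invariants question gets harder.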


  • Invoking a function (main()) from a binary file in C

    - by Dhara Darji
    I have a simple C program, my_bin.c:

        #include <stdio.h>

        int main()
        {
            printf("Success!\n");
            return 0;
        }

    I compiled it with gcc and got the executable my_bin. Now I want to invoke main (or run my_bin) from another C program. I did that with mmap and a function pointer, like this:

        #include <stdio.h>
        #include <fcntl.h>
        #include <sys/mman.h>

        int main()
        {
            void (*fun)();
            int fd;
            int *map;

            fd = open("./my_bin", O_RDONLY);
            map = mmap(0, 8378, PROT_READ, MAP_SHARED, fd, 0);
            fun = map;
            fun();
            return 0;
        }

    PS: I went through some tutorials on how to read a binary file and execute it, but this gives a seg fault. Any help appreciated! Thanks!


  • How are external memory, internal memory, and cache organized?

    - by goldenmean
    Consider a system as follows: a hardware board with, say, an ARM Cortex-A8 and a NEON vector coprocessor, with embedded Linux running on the Cortex-A8. In this environment, if some application is executing (say, a video decoder), then:

    1. How is it decided which buffers are placed in external memory and which are allocated in internal SRAM, etc.?
    2. When one calls calloc/malloc on such a system, is the returned pointer to internal or external memory?
    3. Can a user have buffers allocated in the memory of his choice (internal/external)?
    4. In ARM architectures there is another memory called Tightly Coupled Memory (TCM). What is it, how can a user enable and use it, and can I declare buffers in this memory?
    5. Do I need to see the memory map (if any) of the hardware board to understand all the different physical memories present on a typical board?
    6. How much of a role does the OS play in distinguishing these different memories?

    Sorry for the multiple questions, but I think they are all interlinked.


  • jQuery Cycle plugin and image tags from code-behind

    - by Geetha
    Hi all, I am using the Cycle plugin to show images, and I am binding the HTML for the image tags from code-behind. Problem: the images are not displayed. If I hard-code the image tags, it works. Code:

        $(window).load(function() {
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                data: "{}",
                url: "Default.aspx/GetImage",
                dataType: "json",
                success: function(data) {
                    alert(data.d);
                    $("#sample").html(data.d);
                    $('#sample').cycle({
                        fx: 'fade',
                        continuous: true,
                        speed: 7500,
                        timeout: 55000,
                        pause: 1,
                        sync: 1
                    });
                }
            });
        });

    data.d will give:

        <img src="Images/Frontbanner/s0.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s111.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s112.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s113.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s114.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s115.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s116.jpg" width="664" height="428" border="0" />


  • Write to memory buffer instead of file with libjpeg?

    - by Richard Knop
    I have found this function which uses libjpeg to write to a file:

        int write_jpeg_file(char *filename)
        {
            struct jpeg_compress_struct cinfo;
            struct jpeg_error_mgr jerr;
            /* this is a pointer to one row of image data */
            JSAMPROW row_pointer[1];
            FILE *outfile = fopen(filename, "wb");

            if (!outfile) {
                printf("Error opening output jpeg file %s!\n", filename);
                return -1;
            }
            cinfo.err = jpeg_std_error(&jerr);
            jpeg_create_compress(&cinfo);
            jpeg_stdio_dest(&cinfo, outfile);

            /* Setting the parameters of the output file here;
               width, height, bytes_per_pixel, color_space and raw_image
               are globals set up by the caller */
            cinfo.image_width = width;
            cinfo.image_height = height;
            cinfo.input_components = bytes_per_pixel;
            cinfo.in_color_space = color_space;
            /* default compression parameters; we shouldn't be worried about these */
            jpeg_set_defaults(&cinfo);

            /* Now do the compression... */
            jpeg_start_compress(&cinfo, TRUE);
            /* like reading a file, this time write one row at a time */
            while (cinfo.next_scanline < cinfo.image_height) {
                row_pointer[0] = &raw_image[cinfo.next_scanline * cinfo.image_width * cinfo.input_components];
                jpeg_write_scanlines(&cinfo, row_pointer, 1);
            }
            /* similar to reading a file, clean up after we're done compressing */
            jpeg_finish_compress(&cinfo);
            jpeg_destroy_compress(&cinfo);
            fclose(outfile);

            /* success code is 1! */
            return 1;
        }

    I actually need to write the JPEG-compressed image to a memory buffer, without saving it to a file, to save time. Could somebody give me an example of how to do it? I have been searching the web for a while, but the documentation is scarce, if it exists at all, and examples are also difficult to come by.
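
    For reference, libjpeg version 8 and later (and libjpeg-turbo) ship a built-in memory destination manager, so the function above only needs its jpeg_stdio_dest call swapped out. A sketch (on older libjpeg versions you would have to write a custom jpeg_destination_mgr instead):

        unsigned char *mem = NULL;  /* libjpeg allocates and grows this buffer */
        unsigned long mem_size = 0;

        jpeg_mem_dest(&cinfo, &mem, &mem_size);
        /* ... same parameter setup, jpeg_start_compress, scanline loop ... */
        jpeg_finish_compress(&cinfo);
        /* compressed image now sits in mem[0 .. mem_size-1]; free(mem) when done */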


  • Can someone tell me why I'm seg faulting in this simple C program?

    - by user299648
    I keep getting seg faulted, and for the life of me I don't know why. The file I'm scanning is just 18 strings on 18 lines. I think the problem is the way I'm mallocing the double pointer called picks, but I don't know exactly why. I'm only trying to scanf strings that are fewer than 15 chars long, so I don't see the problem. Can someone please help?

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX_LENGTH 100

        int main(int argc, char *argv[])
        {
            char *string = malloc(sizeof(char));
            char **picks = malloc(15 * sizeof(char));
            FILE *pick_file = fopen(argv[1], "r");
            int num_picks;

            for (num_picks = 0; fgets(string, MAX_LENGTH, pick_file) != NULL; num_picks++) {
                printf("pick a/an %s ", string);
                scanf("%s", picks + num_picks);
            }

            int x;
            for (x = 0; x < num_picks; x++)
                printf("%s\n", picks + x);
        }


  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now), that lets the user write plugins that use a standard API to stream data between each other. There's going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that for the non-file transport, each stream can have multiple readers. IE once a server socket it setup, multiple computers can connect and stream the data. I'm a little stuck at the multi-reader IPC system though. All my plugins run in threads so they live in the same address space, so some kind of shared memory system would work fine, I was thinking I'd write my own circular buffer with a write pointer and read pointers chassing it around the buffer, but I have my doubts that I can achieve the same performance as something like linux pipes. I'm curious what people would suggest for a multi-reader solution to something like this? Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended to be significant volumes of data (tens of mega-samples/sec), so performance is a must.


  • Calculating and saving space in PostgreSQL

    - by punkish
    I have a table in Pg like so:

        CREATE TABLE t
        (
            a BIGSERIAL NOT NULL, -- 8 bytes
            b SMALLINT,           -- 2 bytes
            c SMALLINT,           -- 2 bytes
            d REAL,               -- 4 bytes
            e REAL,               -- 4 bytes
            f REAL,               -- 4 bytes
            g INTEGER,            -- 4 bytes
            h REAL,               -- 4 bytes
            i REAL,               -- 4 bytes
            j SMALLINT,           -- 2 bytes
            k INTEGER,            -- 4 bytes
            l INTEGER,            -- 4 bytes
            m REAL,               -- 4 bytes
            CONSTRAINT a_pkey PRIMARY KEY (a)
        )

    The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion, so the table is going to be pushing 13-14 terabytes. What tricks, if any, could I use to compact this table? My possible ideas are below:

    - Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field.
    - Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like:

        SELECT a, arr[5] FROM t;

    Would I save space with the array option? Would there be a speed penalty? Any other ideas?


  • jQuery drag/drop and clone

    - by Sajeev
    Hi, I need to achieve this: I have a set of draggable items (basically I am dropping designs onto apparel), and I am dropping a clone. If I don't like a dropped design, I want to delete it by doing something like hiding it, but I am unable to do that. Please help me. Here is the code:

        var clone;
        $(document).ready(function() {
            $(".items").draggable({ helper: 'clone', cursor: 'hand' });
            $(".droparea").droppable({
                accept: ".items",
                hoverClass: 'dropareahover',
                tolerance: 'pointer',
                drop: function(ev, ui) {
                    var dropElemId = ui.draggable.attr("id");
                    var dropElem = ui.draggable.html();
                    clone = $(dropElem).clone(); // clone it and hold onto the jQuery object
                    clone.id = "newId";
                    clone.css("position", "absolute");
                    clone.css("top", ui.absolutePosition.top);
                    clone.css("left", ui.absolutePosition.left);
                    clone.draggable({ containment: 'parent', cursor: 'crosshair' });
                    $(this).append(clone);
                    alert("done dragging");
                    // Let's assume I have a delete button; when I click it, the clone
                    // should disappear so that I can drop another design - but the
                    // following line has no effect and the item is still visible.
                    // How do I make it disappear?
                    $('#newId').css("visibility", "hidden");
                }
            });
        });


  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast to hash (~40 per second for 800x600 images), and an unoptimised search algorithm can go through 3,000 images in 22 minutes, comparing each one against the others (3/sec). The basic overview: take an image, rescale it to 8x8, and convert those pixels to HSV. The hue, saturation and value are then truncated to 4 bits each, and the whole thing becomes one big hex string. Comparing two images walks along the two strings and sums the differences it finds. If the total is below 64, it's the same image; different images usually score around 600-800, and below 20 is extremely similar. Are there any improvements upon this model I can use? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but the others? To speed up searches I could probably split the 4 bits from each part in half and put the most significant bits first, so that if they fail the check, the LSBs don't need to be checked at all. I don't know an efficient way to store bits like that while still allowing them to be searched and compared easily, though. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. The scheme is completely immune to resizes and fairly resistant to brightness and contrast changes.
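
    For reference, the hashing pipeline described above would look roughly like this in PHP with GD (a sketch reconstructing the description, not the author's actual code; imagescale needs PHP 5.5+, older versions would use imagecopyresampled instead):

        function similarity_hash($path) {
            $src = imagecreatefromjpeg($path);
            $img = imagescale($src, 8, 8);   // rescale to 8x8
            $hash = '';
            for ($y = 0; $y < 8; $y++) {
                for ($x = 0; $x < 8; $x++) {
                    $rgb = imagecolorat($img, $x, $y);
                    $r = (($rgb >> 16) & 0xFF) / 255.0;
                    $g = (($rgb >> 8) & 0xFF) / 255.0;
                    $b = ($rgb & 0xFF) / 255.0;
                    // RGB -> HSV
                    $max = max($r, $g, $b);
                    $min = min($r, $g, $b);
                    $v = $max;
                    $s = ($max == 0) ? 0 : ($max - $min) / $max;
                    if ($max == $min)   $h = 0;
                    elseif ($max == $r) $h = fmod(60 * ($g - $b) / ($max - $min) + 360, 360);
                    elseif ($max == $g) $h = 60 * ($b - $r) / ($max - $min) + 120;
                    else                $h = 60 * ($r - $g) / ($max - $min) + 240;
                    // truncate each component to 4 bits, appended as one hex digit each
                    $hash .= dechex((int)($h / 360 * 15))
                           . dechex((int)($s * 15))
                           . dechex((int)($v * 15));
                }
            }
            return $hash;   // 8*8 pixels * 3 components = 192 hex digits
        }

        function hash_distance($a, $b) {
            $d = 0;
            for ($i = 0; $i < strlen($a); $i++)
                $d += abs(hexdec($a[$i]) - hexdec($b[$i]));
            return $d;   // < 64 => same image, per the thresholds above
        }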


  • CSS checkbox label selector

    - by HW90
    I'm developing an MVC3 application and need to select a checkbox's label. In ASP.NET MVC3 you have helper methods which generate part of the markup, so the code for a checkbox looks like this:

        <input id="Jumping_successleicht" type="checkbox" value="true" name="Jumping_successleicht">
        <input type="hidden" value="false" name="Jumping_successleicht">
        <label for="Jumping_successleicht">
            <span>leicht (4)</span>
        </label>

    I thought I could use the following CSS to select the label:

        input[type=checkbox] + label {
            background: url("../../Images/Controls/Checkbox.png") no-repeat scroll left center transparent;
            clear: none;
            cursor: pointer;
            margin: 0;
            padding: 5px 0 4px 24px;
        }

    But it does not work. It looks like the label and input have to be next to each other. Does anyone have a solution to this problem?
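
    For what it's worth, the behaviour observed above comes from the adjacent sibling combinator (+), which only matches an element immediately following another; the helper emits a hidden input between the checkbox and the label, so + never matches. The general sibling combinator would be one thing to try (a sketch, assuming the elements share a parent):

        /* ~ matches any later sibling, not just the immediately adjacent one */
        input[type=checkbox] ~ label {
            background: url("../../Images/Controls/Checkbox.png") no-repeat scroll left center transparent;
            cursor: pointer;
        }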

    Read the article
