Search Results

Search found 18029 results on 722 pages for 'stripe size'.

  • Help with jQuery + Masonry plugin: how to expand/collapse boxes to reveal content

    - by Jam
    I'm using the Masonry jQuery plugin on a project: http://desandro.com/resources/jquery-masonry Basically I have a set of boxes (thumbnails) in a grid. When one is clicked, I want it to expand to a larger size to show more content (additional images and text). I'm struggling with how to make the thumbnail disappear and then have new content appear in the expanded box. I don't know how to make the new content appear or where to store it on my page, and it needs to have a close button. The creator of the plugin gave me a quick tip for the expanding part, but the code I'm using has a set height and width, and I want them to be variable depending on how much content is in the expanded state. Here's my jQuery code so far: http://pastie.org/1002101 This is a similar example of the behaviour I want to achieve, except my boxes will have varying expanded sizes: http://www.learnsomethingeveryday.co.uk You'll also notice from that example that it only allows one box to be expanded at a time; I would also like to have that functionality. Sorry for all the questions, I'm pretty new to jQuery. Any help would be amazing!
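
    A minimal sketch of the expand/collapse logic, assuming each box keeps its extra content in a hidden inner element. The .box, .expanded, .full and #container names are placeholders, and the re-layout call follows the jQuery Masonry plugin's documented usage at the time:

        $('.box').click(function () {
            var $box = $(this);
            // Collapse any other open box so only one is expanded at a time.
            $('.box.expanded').not($box).removeClass('expanded');
            $box.toggleClass('expanded');
            $('#container').masonry(); // ask Masonry to re-lay out around the new size
        });

    With CSS along the lines of .box .full { display: none; } and .box.expanded .full { display: block; }, the expanded box grows to fit its content instead of a fixed width/height, and a close button inside the box can simply remove the expanded class and call .masonry() again.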

  • Grails - Recursive computation throws StackOverflowError and doesn't update on show

    - by Kedare
    Hello, I have a little problem. I have made a Category domain class in Grails, but I want to "cache" the string representation in the database by generating it whenever the representation field is empty or null (to avoid recursive queries on every toString). Here is the code:

        package devkedare

        class Category {
            String title;
            String description;
            String representation;
            Date dateCreated;
            Date lastUpdate;
            Category parent;

            static constraints = {
                title(blank:false, nullable:false, size:2..100);
                description(blank:true, nullable:false);
                dateCreated(nullable:true);
                lastUpdate(nullable:true);
                autoTimestamp: true;
            }

            static hasMany = [childs:Category]

            @Override
            String toString() {
                if((!this.representation) || (this.representation == "")) {
                    this.computeRepresentation(true)
                }
                return this.representation;
            }

            String computeRepresentation(boolean updateChilds) {
                if(updateChilds) {
                    for(child in this.childs) {
                        child.representation = child.computeRepresentation(true);
                    }
                }
                if(this.parent) {
                    this.representation = "${this.parent.computeRepresentation(false)}/${this.title}";
                } else {
                    this.representation = this.title
                }
                return this.representation;
            }

            void beforeUpdate() {
                this.computeRepresentation(true);
            }
        }

    There are two problems:

    1. It doesn't update the representation when the representation is null or empty and toString is called; it only updates it on save. How do I fix that?
    2. On some categories, updating throws a StackOverflowError. Here is my actual category table as CSV (uploaded as a file, since pasting CSV looks buggy): http://kedare.free.fr/Dist/01.csv Updating the representation works everywhere except on "IT", which throws a StackOverflowError (this is the only category that has many childs).

    Any idea how I can fix those two problems? Thank you!
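
    A StackOverflowError in a routine like this usually means the parent/child links contain a cycle (a category whose parent chain eventually reaches itself), so the recursion never bottoms out. A hedged sketch of a cycle-guarded version; the Set-based guard is an assumption about the data, not part of the original code:

        String computeRepresentation(Set visited = [] as Set) {
            // Cycle guard: if this category was already visited, stop recursing.
            if (!visited.add(this)) {
                return this.title
            }
            this.representation = this.parent ?
                "${this.parent.computeRepresentation(visited)}/${this.title}" : this.title
            return this.representation
        }

    On the first problem: computing the value inside toString only mutates the in-memory instance; nothing is persisted unless the object is saved afterwards, which would explain why the cached value only appears after a save.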

  • array of structs in C

    - by Hristo
    I'm trying to create an array of structs and also a pointer to that array. I don't know how large the array is going to be, so it should be dynamic. My struct would look something like this:

        typedef struct _stats_t {
            int hours[24];
            int numPostsInHour;
            int days[7];
            int numPostsInDay;
            int weeks[20];
            int numPostsInWeek;
            int totNumLinesInPosts;
            int numPostsAnalyzed;
        } stats_t;

    ... and I need to have multiple of these structs, one for each file (unknown amount) that I will analyze. I'm not sure how to do this. I don't like the following approach because of the limit on the size of the array:

        #define MAX 10

        typedef struct _stats_t {
            int hours[24];
            int numPostsInHour;
            int days[7];
            int numPostsInDay;
            int weeks[20];
            int numPostsInWeek;
            int totNumLinesInPosts;
            int numPostsAnalyzed;
        } stats_t[MAX];

    So how would I create this array? Also, would a pointer to this array look something like this?

        stats_t stats[];
        stats_t *statsPtr = &stats[0];

    Thanks, Hristo
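
    A minimal sketch of the usual dynamic approach, using the stats_t typedef from the question with malloc/realloc so the array can grow as files are discovered; the capacity numbers are illustrative and error handling is kept minimal:

        #include <stdlib.h>

        size_t capacity = 8;            /* initial capacity, grows as needed */
        size_t count = 0;
        stats_t *stats = malloc(capacity * sizeof *stats);

        /* For each file analyzed: grow the array when it is full. */
        if (count == capacity) {
            capacity *= 2;
            stats = realloc(stats, capacity * sizeof *stats);
        }
        stats[count++] = (stats_t){0};  /* append a zeroed stats_t */

        /* "stats" already is a pointer to the first element, so: */
        stats_t *statsPtr = stats;      /* same as &stats[0] */

        free(stats);                    /* when done */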

  • Problem cube mapping in OpenGL using DDS compressed images

    - by Paul Jones
    Hi All, I am having trouble cube mapping when using a DDS cube map. I'm just getting a black cube, which leads me to believe I am missing something simple. Here's the code so far:

        DDS_IMAGE_DATA *pDDSImageData = LoadDDSFile(filename);
        //compressedTexture = -1;
        if(pDDSImageData != NULL)
        {
            int height     = pDDSImageData->height;
            int width      = pDDSImageData->width;
            int numMipMaps = pDDSImageData->numMipMaps;
            int blockSize;

            GLenum cubefaces[6] =
            {
                GL_TEXTURE_CUBE_MAP_POSITIVE_X,
                GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
                GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
                GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
                GL_TEXTURE_CUBE_MAP_POSITIVE_Z,
                GL_TEXTURE_CUBE_MAP_NEGATIVE_Z,
            };

            if( pDDSImageData->format == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT )
                blockSize = 8;
            else
                blockSize = 16;

            glGenTextures( 1, &textureId );

            int nSize;
            int nOffset = 0;

            glEnable(GL_TEXTURE_CUBE_MAP);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

            for(int face = 0; face < 6; face++)
            {
                for( int i = 0; i < numMipMaps; i++ )
                {
                    if( width  == 0 ) width  = 1;
                    if( height == 0 ) height = 1;

                    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_GENERATE_MIPMAP, GL_TRUE);
                    nSize = ((width+3)>>2) * ((height+3)>>2) * blockSize;
                    glCompressedTexImage2D(cubefaces[face], i, pDDSImageData->format,
                                           width, height, 0, nSize,
                                           pDDSImageData->pixels + nOffset);
                    nOffset += nSize;

                    // Halve the image size for the next mip-map level...
                    width  = (width / 2);
                    height = (height / 2);
                }
            }
        }

    Once this code is called I bind the texture using glBindTexture and draw a cube using GL_QUADS and glTexCoord3f. Thank you for reading; all comments are welcome. Sorry if any of the formatting comes out wrong.
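
    Two things worth checking in the code above, offered as hypotheses rather than a confirmed diagnosis: the texture name from glGenTextures is never bound before the upload loop, so the glTexParameteri and glCompressedTexImage2D calls do not target it, and width/height keep halving across all six faces because they are only reset once, outside the face loop. A sketch of both fixes:

        glGenTextures(1, &textureId);
        glBindTexture(GL_TEXTURE_CUBE_MAP, textureId); // bind before setting state/uploading

        for (int face = 0; face < 6; face++)
        {
            int w = pDDSImageData->width;   // restart the mip chain for every face
            int h = pDDSImageData->height;

            for (int i = 0; i < numMipMaps; i++)
            {
                if (w == 0) w = 1;
                if (h == 0) h = 1;

                int size = ((w + 3) >> 2) * ((h + 3) >> 2) * blockSize;
                glCompressedTexImage2D(cubefaces[face], i, pDDSImageData->format,
                                       w, h, 0, size, pDDSImageData->pixels + nOffset);
                nOffset += size;
                w /= 2;
                h /= 2;
            }
        }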

  • Cross-browser padding/margins

    - by Lucifer
    Hi All, I was wondering if you could give me some helpful hints on how to correct this issue. I have a main menu on my site; the code for it is as follows:

        li:hover {
            background-color: #222222;
            padding-top: 8px;
            padding-bottom: 9px;
        }

    And here's a demo of what it actually looks like (screenshot omitted). The problem is that when I hover over a menu option (li), the background appears, but it overflows outside the menu's background and makes it look really dodgy/crap/cheap/yuck! Note that (obviously) when I change the padding to make it display correctly in these browsers, it appears too small in height in IE, so I'm screwed either way. How can I make little imperfections like this look the same in all browsers?

    Update: the HTML (the menu):

        <ul class="menu">
            <li class="currentPage" href="index.php"><a>Home</a></li>
            <li><a href="services.php">Services</a></li>
            <li><a href="support.php">Support</a></li>
            <li><a href="contact.php">Contact Us</a></li>
            <li><a href="myaccount/" class="myaccount">My Account</a></li>
        </ul>

    The CSS:

        .menu {
            margin-top: 5px;
            margin-right: 5px;
            width: 345px;
            float: right;
        }

        li {
            font-size: 9pt;
            color: whitesmoke;
            padding-left: 6px;
            padding-right: 8px;
            display: inline;
            list-style: none;
        }

        li:hover {
            background-color: #222222;
            padding-top: 8px;
            padding-bottom: 9px;
        }
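
    One hedged approach that tends to behave consistently across browsers: vertical padding on inline boxes does not expand the parent, which is why the hover background bleeds outside the bar. Moving the padding onto the anchors and making them inline-block keeps the highlight inside the menu. The pixel values below are guesses matched to the 8px/9px in the question:

        .menu li { padding: 0; display: inline; }

        .menu li a {
            display: inline-block;      /* gives the anchor a real box to pad */
            padding: 8px 8px 9px 6px;   /* top right bottom left */
        }

        .menu li a:hover { background-color: #222222; }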

  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selecting and linking multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and their associated repositories across multiple posts. When a repository is first accessed, it uses the existing data context if one exists; otherwise it creates a new context. The problem is that if the lifetime of the context is only per-request (as suggested in the article), then you have to deal with multiple contexts and the complexity of detaching and attaching entities between contexts. My solution is to share the context between posts by creating a single view model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable, and retrieving it from Session on subsequent page requests, thereby maintaining the same context across all posts until the profile is saved. This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model's Session variable, or, more importantly, the size of the Session variable. So two questions, I suppose: firstly, should I look for a better solution to handle the shared context across posts (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!
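
    A common alternative pattern, sketched under the assumption that the in-progress profile can be captured as plain serializable state: keep the ObjectContext strictly per-request (created once and shared by every repository within that request), and keep only the lightweight draft data in Session rather than repositories and change-tracked entities. MyEntities and ProfileDraft are hypothetical names:

        // using System.Web;
        public static class RequestContext
        {
            private const string Key = "RequestObjectContext";

            // One context per HTTP request, shared by all repositories.
            public static MyEntities Current
            {
                get
                {
                    var ctx = (MyEntities)HttpContext.Current.Items[Key];
                    if (ctx == null)
                    {
                        ctx = new MyEntities();
                        HttpContext.Current.Items[Key] = ctx;
                    }
                    return ctx;
                }
            }
        }

        // Session then holds only IDs and pending edits, e.g.:
        // Session["ProfileDraft"] = new ProfileDraft { SelectedEntityIds = ... };

    On the second question: a repository that holds a context drags the context, its state manager, and every attached entity along with it, so with an out-of-process session provider that graph must serialize (and may fail or grow large), while with InProc it is held as live object graphs in server memory.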

  • Database design question regarding duplicate information

    - by galford13x
    I have a database that contains a history of product sales, for example the following table:

        CREATE TABLE SalesHistoryTable (
            OrderID,   -- order number, unique across all orders
            ProductID, -- product ID, can be used as a key to look up product info in another table
            Price,     -- price of the product per unit at the time of the order
            Quantity,  -- quantity of the product for the order
            Total,     -- total cost of the order for the product (Price * Quantity)
            Date,      -- date of the order
            StoreID,   -- the store that created the order
            PRIMARY KEY(OrderID));

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query. For example:

        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) AS QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get information about products for any particular store. You could then determine which store has sold the most, which has made the most money, and which on average sells for the most/least. This would be very costly to use as a normal query run at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue? For example, I could create another table with duplicate information:

        StoreID (Key), ProductID, TotalCost, QTY, AvgPrice

    and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of the update is almost nothing. What should be considered, given the above scenario?
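
    A hedged sketch of the trigger-maintained summary table idea, written in SQL Server syntax. The upsert handling for a product's first sale at a store is omitted for brevity, and AvgPrice is deliberately not stored, since it can always be derived as TotalCost / QTY:

        CREATE TABLE SalesSummary (
            StoreID    int   NOT NULL,
            ProductID  int   NOT NULL,
            TotalCost  money NOT NULL DEFAULT 0,
            QTY        int   NOT NULL DEFAULT 0,
            PRIMARY KEY (StoreID, ProductID));

        CREATE TRIGGER trgSalesHistoryInsert
        ON SalesHistoryTable AFTER INSERT AS
        BEGIN
            -- Fold each new order into the per-store, per-product running totals.
            UPDATE s
            SET s.TotalCost = s.TotalCost + i.Total,
                s.QTY       = s.QTY + i.Quantity
            FROM SalesSummary s
            JOIN inserted i ON i.StoreID = s.StoreID AND i.ProductID = s.ProductID;
        END;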

  • My First F# program

    - by sudaly
    Hi, I just finished writing my first F# program. Functionality-wise the code works the way I wanted, but I'm not sure whether it is efficient. I would much appreciate it if someone could review the code for me and point out the areas where it can be improved. Thanks, Sudaly

        open System
        open System.IO
        open System.IO.Pipes
        open System.Text
        open System.Collections.Generic
        open System.Runtime.Serialization

        [<DataContract>]
        type Quote = {
            [<field: DataMember(Name="securityIdentifier") >] RicCode:string
            [<field: DataMember(Name="madeOn") >] MadeOn:DateTime
            [<field: DataMember(Name="closePrice") >] Price:float
        }

        let m_cache = new Dictionary<string, Quote>()

        let ParseQuoteString (quoteString:string) =
            let data = Encoding.Unicode.GetBytes(quoteString)
            let stream = new MemoryStream()
            stream.Write(data, 0, data.Length);
            stream.Position <- 0L
            let ser = Json.DataContractJsonSerializer(typeof<Quote array>)
            let results:Quote array = ser.ReadObject(stream) :?> Quote array
            results

        let RefreshCache quoteList =
            m_cache.Clear()
            quoteList |> Array.iter(fun result -> m_cache.Add(result.RicCode, result))

        let EstablishConnection() =
            let pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.InOut, 4)
            let mutable sr = null
            printfn "[F#] NamedPipeServerStream thread created, Wait for a client to connect"
            pipeServer.WaitForConnection()
            printfn "[F#] Client connected."
            try
                // Stream for the request.
                sr <- new StreamReader(pipeServer)
            with
            | _ as e -> printfn "[F#]ERROR: %s" e.Message
            sr

        while true do
            let sr = EstablishConnection()
            // Read request from the stream.
            printfn "[F#] Ready to Receive data"
            sr.ReadLine()
            |> ParseQuoteString
            |> RefreshCache
            printfn "[F#]Quot Size, %d" m_cache.Count
            let quot = m_cache.["MSFT.OQ"]
            printfn "[F#]RIC: %s" quot.RicCode
            printfn "[F#]MadeOn: %s" (String.Format("{0:T}",quot.MadeOn))
            printfn "[F#]Price: %f" quot.Price

  • Open source gravatar-like implementations?

    - by Tauren
    I'm already using gravatar icons for the users of my web service. However, I'm finding several problems with this approach:

    - Only a small percentage of the users take the time to set up a gravatar profile. My users are not tech-savvy, but would be likely to add a dedicated photo to my site.
    - Users of my service are encouraged to use images that depict them in proper uniform for the industry my service relates to. They wouldn't want that same picture to be used for personal purposes throughout the internet.
    - They would not take the time or effort to manage a separate email address and gravatar account just to have an "in-uniform" profile photo for my service.

    Before I implement my own profile image feature, I was wondering if there are any open-source solutions that I could leverage with similar features to gravatar. Specifically:

    - The ability to display any size thumbnail (up to 512px would be fine)
    - Takes care of caching different-sized thumbnails
    - Has support for something like identicons, preferably pluggable with different style algorithms (monsters, etc.), even better if I can customize these
    - Ability to fall back to gravatar if no photo is found

    Does anything like this exist? I haven't found it yet if it does.

  • Can't read some attributes with SAX

    - by akappa
    Hi all, I'm trying to parse this document with SAX:

        <scxml version="1.0" initialstate="start" name="calc">
            <datamodel>
                <data id="expr" expr="0" />
                <data id="res" expr="0" />
            </datamodel>
            <state id="start">
                <transition event="OPER" target="opEntered" />
                <transition event="DIGIT" target="operand" />
            </state>
            <state id="operand">
                <transition event="OPER" target="opEntered" />
                <transition event="DIGIT" />
            </state>
        </scxml>

    I read all the attributes fine, except "initialstate" and "name". I get the attributes with the startElement handler, but the size of the attribute list for scxml is zero. Why? How can I overcome this problem?

    Edit:

        public void startElement(String uri, String localName, String qName, Attributes attributes) {
            System.out.println(attributes.getValue("initialstate"));
            System.out.println(attributes.getValue("name"));
        }

    When parsing the first tag, this doesn't work (prints "null" two times). In fact, attributes.getLength() evaluates to zero. Thanks
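
    A quick diagnostic sketch, using only the standard SAX Attributes API: dump everything the parser actually delivers for each element. If the list really is empty for scxml, one hypothesis worth ruling out is how the parser was configured (for example, namespace-processing settings on the SAXParserFactory), since that changes how qualified names and attributes are reported:

        @Override
        public void startElement(String uri, String localName, String qName,
                                 Attributes attributes) {
            System.out.println("element: " + qName
                    + " (" + attributes.getLength() + " attributes)");
            for (int i = 0; i < attributes.getLength(); i++) {
                // Print both qualified and local names to expose namespace effects.
                System.out.println("  " + attributes.getQName(i) + " / "
                        + attributes.getLocalName(i) + " = " + attributes.getValue(i));
            }
        }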

  • Encoding h.264 with libavcodec/x264

    - by Leviathan
    I am attempting to encode video using libavcodec/libavformat. I'm trying to adapt the standard output-example.c from the ffmpeg source. The AVI file is created on disk, but only the sound is encoded. I tried adding a lot of options for x264 from here. All the other codecs work fine: mpeg2, mpeg4, mjpeg, xvid. In addition to specifying the x264 parameters, I also set the codec on the AVOutputFormat structure. That's all I've done:

        AVOutputFormat *pOutFormat; // in header file

        av_register_all();
        AVCodec *codec = avcodec_find_encoder_by_name("libx264");
        pOutFormat = guess_format("avi", NULL, NULL);
        pOutFormat->video_codec = codec->id;

    The debug output of my application:

        Output #0, mp4, to 'D:\1.avi':
            Stream #0.0: Video: libx264, yuv420p, 320x240, q=10-51, 500 kb/s, 90k tbn, 25 tbc
            Stream #0.1: Audio: aac, 44100 Hz, 1 channels, s16, 128 kb/s
        [libx264 @ 0x694010]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
        [libx264 @ 0x694010]bitrate tolerance too small, using .01
        [libx264 @ 0x694010]profile Main, level 2.0
        [libx264 @ 0x694010]frame I:150 Avg QP:14.76 size: 2534
        [libx264 @ 0x694010]mb I I16..4: 75.9% 0.0% 24.1%
        [libx264 @ 0x694010]final ratefactor: 17.57
        [libx264 @ 0x694010]coded y,uvDC,uvAC intra: 42.7% 92.4% 47.4%
        [libx264 @ 0x694010]i16 v,h,dc,p: 11% 14% 2% 73%
        [libx264 @ 0x694010]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 18% 29% 5% 8% 10% 3% 3% 2%
        [libx264 @ 0x694010]kb/s:506.79

  • UITableViewCells of different heights place their accessoryViews at different X positions

    - by Jasarien
    Hey guys, my app has some table cells that vary in height. The cells can also have a UIButton set to be a detail disclosure button (round, blue, with arrow) as their accessory view. Depending on the height of the cell, the accessory view is positioned differently. At first I thought my cell layout code was causing the problem, so I set up a quick independent test using vanilla UITableViewCells to remove the possibility that it could be my fault. I set up a view in Interface Builder and just added a few table cells to the view, set their heights to different values, and then added a detail disclosure button to each. Nothing more, nothing less. This is what I see (screenshot omitted); I added the size guides (thanks to Xscope) so you can see the difference in the accessory view x positions. The heights are: top 37px, mid 68px, bottom 44px (the default, untouched height). If I increase the height beyond 68px, the accessory view doesn't move any further to the left. Is this a bug? Is there any way I can prevent this from happening?

  • javascript table sorting/paging (client-side). How big is too big?

    - by Aheho
    I'm using a jQuery plugin called Tablesorter to do client-side sorting of a log table in one of my applications. I am also making use of the tablepager add-in. I really like the responsiveness that client-side sorting and paging brings to the party. I also like how you don't have to hit the web server or database repeatedly. However, I can see that, in time, the log I'm displaying could grow quite large. I'm sure there comes a point where client-side paging and sorting become impractical. At what point will this technique begin to collapse under its own weight? 500 records? 2000 records? 10,000 records? EDIT: In a nutshell, what criteria would you use to determine whether to use client-side sorting/paging as opposed to server-side paging? Does the size of the expected result set factor into your decision? Where is the tipping point?

  • Binary search in a sorted (memory-mapped ?) file in Java

    - by sds
    I am struggling to port a Perl program to Java, and learning Java as I go. A central component of the original program is a Perl module that does string prefix lookups in a 500+ GB sorted text file using binary search (essentially: seek to a byte offset in the middle of the file, backtrack to the nearest newline, compare the line prefix with the search string, seek to half/double that byte offset, repeat until found). I have experimented with several database solutions but found that nothing beats this in sheer lookup speed with data sets of this size. Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random-access reads in text files? Alternatively, I am not familiar with the new (?) Java I/O libraries, but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems.
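
    A minimal sketch of the seek-based binary search in plain Java, using RandomAccessFile. It assumes lines are sorted by byte value in a single-byte-safe encoding (RandomAccessFile.readLine decodes bytes roughly as Latin-1). Note also that a single MappedByteBuffer is capped at 2 GB, so memory-mapping a 500 GB file would mean managing a window of mappings; seeking avoids that. The class and method names are illustrative:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public final class SortedFileSearch {

            // First complete line starting strictly after byte offset pos (-1 = start of file).
            private static String firstLineAfter(RandomAccessFile f, long pos) throws IOException {
                if (pos < 0) {
                    f.seek(0);
                } else {
                    f.seek(pos);
                    f.readLine(); // discard the (possibly partial) line we landed in
                }
                return f.readLine();
            }

            // Returns the first line that sorts >= key, or null if every line sorts before it.
            public static String firstLineAtLeast(RandomAccessFile f, String key) throws IOException {
                long lo = -1;         // lines after lo may still include the answer
                long hi = f.length(); // firstLineAfter(hi) is null (end of file)
                while (hi - lo > 1) {
                    long mid = lo + (hi - lo) / 2;
                    String line = firstLineAfter(f, mid);
                    if (line == null || line.compareTo(key) >= 0) hi = mid;
                    else lo = mid;
                }
                String line = firstLineAfter(f, lo);
                return (line != null && line.compareTo(key) >= 0) ? line : firstLineAfter(f, hi);
            }
        }

    A prefix lookup then reduces to finding the first line >= the prefix and checking that it actually starts with it: hit != null && hit.startsWith(prefix).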

  • How do I read binary C++ protobuf data using Python protobuf?

    - by nbolton
    The Python version of Google protobuf gives us only SerializeAsString(), whereas the C++ version gives us both SerializeToArray(...) and SerializeAsString(). We're writing to our C++ file in binary format, and we'd like to keep it this way. That said, is there a way of reading the binary data into Python and parsing it as if it were a string? Is this the correct way of doing it?

        binary = get_binary_data()
        binary_size = get_binary_size()
        string = None
        for i in range(len(binary_size)):
            string += i
        message = new MyMessage()
        message.ParseFromString(string)

    Update: Here's a new example, and a problem:

        message_length = 512
        file = open('foobars.bin', 'rb')
        eof = False
        while not eof:
            data = file.read(message_length)
            eof = not data
            if not eof:
                foo_bar = FooBar()
                foo_bar.ParseFromString(data)

    When we get to the foo_bar.ParseFromString(data) line, I get this error:

        Exception Type:  DecodeError
        Exception Value: Too many bytes when decoding varint.

    Update 2: It turns out that the padding on the binary data was throwing protobuf off; too many bytes were being sent in, as the message suggests (in this case it was referring to the padding). This padding comes from using the C++ protobuf function SerializeToArray on a fixed-length buffer. To eliminate this, I have used this temporary code:

        message_length = 512
        file = open('foobars.bin', 'rb')
        eof = False
        while not eof:
            data = file.read(message_length)
            eof = not data
            string = ''
            for i in range(0, len(data)):
                byte = data[i]
                if byte != '\xcc':  # yuck!
                    string += data[i]
            if not eof:
                foo_bar = FooBar()
                foo_bar.ParseFromString(string)

    There is a design flaw here, I think. I will re-implement my C++ code so that it writes variable-length arrays to the binary file. As advised by the protobuf documentation, I will prefix each message with its binary size so that I know how much to read when I'm opening the file with Python.
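
    A hedged sketch of the length-prefix scheme described above, on the Python side. It assumes the C++ writer emits the same fixed-size prefix (here a 4-byte little-endian uint32) before each message; FooBar is the generated class from the question:

        import struct

        def read_messages(path):
            # Yields FooBar messages from a file of [4-byte length][message bytes] records.
            with open(path, 'rb') as f:
                while True:
                    header = f.read(4)
                    if len(header) < 4:
                        break  # clean EOF (or a truncated header)
                    (size,) = struct.unpack('<I', header)
                    payload = f.read(size)
                    foo_bar = FooBar()
                    foo_bar.ParseFromString(payload)
                    yield foo_bar

        for msg in read_messages('foobars.bin'):
            pass  # process each FooBar here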

  • Most efficient way to send images across processes

    - by Heinrich Ulbricht
    Goal: pass images generated by one process efficiently and at very high speed to another process. The two processes run on the same machine and on the same desktop. The operating system may be WinXP, Vista or Win7.

    Detailed description: The first process is solely for controlling the communication with a device which produces the images. These images are about 500x300px in size and may be updated up to several hundred times per second. The second process needs these images to display them. The first process uses a third-party API to paint the images from the device to an HDC; this HDC has to be provided by me. Note: there is already a connection open between the two processes. They are communicating via anonymous pipes and share memory-mapped file views.

    Thoughts: How would I achieve this goal with as little work as possible? And I mean both work for me and for the computer. I am using Delphi, so maybe there is some component available for doing this? I think I could always paint to any image component's HDC, save the content to a memory stream, copy the contents via the memory-mapped file, unpack it on the other side, and paint it there to the destination HDC. I also read about an IPicture interface which can be used to marshal images. What are your ideas? I appreciate every thought on this!
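
    One technique worth evaluating, sketched here as an assumption-laden Delphi fragment rather than a drop-in solution: create the HDC's bitmap as a DIB section backed by a named file-mapping, so the producer paints directly into memory the display process has also mapped, and no per-frame copy is needed beyond the device paint itself. The section name and 32-bit 500x300 format are illustrative:

        var
          bmi: TBitmapInfo;
          hSection: THandle;
          hDib: HBITMAP;
          pBits: Pointer;
        begin
          ZeroMemory(@bmi, SizeOf(bmi));
          bmi.bmiHeader.biSize := SizeOf(TBitmapInfoHeader);
          bmi.bmiHeader.biWidth := 500;
          bmi.bmiHeader.biHeight := -300;  // negative = top-down DIB
          bmi.bmiHeader.biPlanes := 1;
          bmi.bmiHeader.biBitCount := 32;
          bmi.bmiHeader.biCompression := BI_RGB;

          // Shared pixel memory, visible to both processes by name.
          hSection := CreateFileMapping(INVALID_HANDLE_VALUE, nil, PAGE_READWRITE,
                                        0, 500 * 300 * 4, 'Local\SharedFrame');
          hDib := CreateDIBSection(0, bmi, DIB_RGB_COLORS, pBits, hSection, 0);
          // Select hDib into a memory DC and hand that DC to the device API;
          // the display process maps the same section and blits from it.
        end;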

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?

    Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
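
    A sketch of the staging-table variant from point 2, assuming SQL Server 2005 syntax: bulk-copy into an unindexed heap, then move rows into the main table in clustered-key order so the transfer is a mostly sequential append:

        CREATE TABLE [BulkDataStaging](   -- same columns, no PK or indexes
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL);

        -- Point SqlBulkCopy.DestinationTableName at BulkDataStaging, then periodically:
        INSERT INTO [BulkData]
        SELECT [ContainerId], [BinId], [Sequence], [ItemId],
               [Left], [Top], [Right], [Bottom]
        FROM [BulkDataStaging]
        ORDER BY [ContainerId], [BinId], [Sequence];  -- matches the clustered key

        TRUNCATE TABLE [BulkDataStaging];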

  • jQuery Validation error...

    - by Povylas
    Hi, I have been struggling with this jQuery Validation plugin. Here is the code:

        <script type="text/javascript">
        $(function() {
            var validator = $('#signup').validate({
                errorElement: 'span',
                rules: {
                    username: {
                        required: true,
                        minlenght: 6
                        //remote: "check-username.php"
                    },
                    password: {
                        required: true,
                        minlength: 5
                    },
                    confirm_password: {
                        required: true,
                        minlength: 5,
                        equalTo: "#password"
                    },
                    email: {
                        required: true,
                        email: true
                    },
                    agree: "required"
                },
                messages: {
                    username: {
                        required: "Please enter a username",
                        minlength: "Your username must consist of at least 6 characters"
                        //remote: "Somenoe have already chosen nick like this."
                    },
                    password: {
                        required: "Please provide a password",
                        minlength: "Your password must be at least 5 characters long"
                    },
                    confirm_password: {
                        required: "Please provide a password",
                        minlength: "Your password must be at least 5 characters long",
                        equalTo: "Please enter the same password as above"
                    },
                    email: "Please enter a valid email address",
                    agree: "Please accept our policy"
                }
            });

            var root = $("#wizard").scrollable({size: 1, clickable: false});

            // some variables that we need
            var api = root.scrollable();

            $("#data").click(function() {
                validator.form();
            });

            // validation logic is done inside the onBeforeSeek callback
            api.onBeforeSeek(function(event, i) {
                if($("#signup").valid() == false){
                    return false;
                }else{
                    return true;
                }
                $("#status li").removeClass("active").eq(i).addClass("active");
            });

            // if tab is pressed on the next button, seek to the next page
            root.find("button.next").keydown(function(e) {
                if (e.keyCode == 9) {
                    // seeks to next tab by executing our validation routine
                    api.next();
                    e.preventDefault();
                }
            });

            $('button.fin').click(function(){
                parent.$.fn.fancybox.close()
            });
        });
        </script>

    And here is the error:

        $.validator.methods[method] is undefined
        http://www.vvv.vhost.lt/js/jquery-validate/jquery.validate.min.js
        Line 15

    I am completely confused. Maybe some kind of handler is needed? I would be grateful for any kind of answer.
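
    One hypothesis worth checking first: that error is what the plugin raises when a rule name has no matching validation method, and the username rules above spell the rule "minlenght" rather than "minlength", so the plugin looks up a method that does not exist. A sketch of the corrected rule:

        username: {
            required: true,
            minlength: 6   // was "minlenght": no such validator method is registered
        }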

  • Save QList<int> to QSettings

    - by Tobias
    Hello, I want to save a QList<int> to my QSettings without looping through it. I know that I could use writeArray() and a loop to save all items, or write the QList to a QByteArray and save that, but then it is not human-readable in my INI file. Currently I am using the following to transform my QList<int> into a QList<QVariant>:

        QList<QVariant> variantList;
        // temp is the QList<int>
        for (int i = 0; i < temp.size(); i++)
            variantList.append(temp.at(i));

    And to save this QList<QVariant> to my settings I use the following code:

        QVariant list;
        list.setValue(variantList);
        // saveSession is my QSettings object
        saveSession.setValue("MyList", list);

    The QList is correctly saved to my INI file, as I can see (a comma-separated list of my ints), but the application crashes on exit. I already tried using a pointer to my QSettings object instead, but then it crashes on deleting the pointer.
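
    A hedged alternative that stays human-readable and avoids wrapping the list inside an extra QVariant: round-trip through QStringList, which QSettings stores natively as a comma-separated INI value. temp, saveSession and the key name are taken from the question:

        // Write: QList<int> -> QStringList, stored as a plain comma-separated value.
        QStringList strings;
        foreach (int value, temp)
            strings << QString::number(value);
        saveSession.setValue("MyList", strings);

        // Read: back from QStringList to QList<int>.
        QList<int> restored;
        foreach (const QString &s, saveSession.value("MyList").toStringList())
            restored << s.toInt();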

  • Does NHibernate LINQ support ToLower() in Where() clauses?

    - by Daniel T.
    I have an entity and its mapping:

        public class Test {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual string Description { get; set; }
        }

        public class TestMap : EntityMap<Test> {
            public TestMap() {
                Id(x => x.Id);
                Map(x => x.Name);
                Map(x => x.Description);
            }
        }

    I'm trying to run a query on it (to grab it out of the database):

        var keyword = "test";        // this is coming in from the user
        keyword = keyword.ToLower(); // convert it to all lower-case
        var results = session.Linq<Test>()
            .Where(x => x.Name.ToLower().Contains(keyword));
        results.Count();             // execute the query

    However, whenever I run this query, I get the following exception:

        Index was out of range. Must be non-negative and less than the size of the collection.
        Parameter name: index

    Am I right in saying that, currently, LINQ to NHibernate does not support ToLower()? And if so, is there an alternative that lets me search for a string in the middle of another string that LINQ to NHibernate is compatible with? For example, if the user searches for kap, I need it to match Kapiolani, Makapuu, and Lapkap.
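
    If the LINQ provider turns out to be the problem, one commonly used fallback (sketched against the classic Criteria API that ships with NHibernate, offered as an alternative rather than a confirmed fix) is a case-insensitive LIKE with wildcards on both sides:

        using NHibernate.Criterion;

        var results = session.CreateCriteria<Test>()
            .Add(Restrictions.InsensitiveLike("Name", keyword, MatchMode.Anywhere))
            .List<Test>();  // "kap" matches Kapiolani, Makapuu, Lapkap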

  • Button's OnClick event not firing when it causes a textbox's onChange event to fire first

    - by user48408
    I have a few textboxes and a button to save their values on a webpage. The onchange event of the textboxes fires some JS which adds the changed text to a JS array. The OK button, when clicked, flushes this to the database via a web service. This works fine except when the onchange event is caused by clicking the OK button. In this scenario the onchange of the textboxes still fires, but the onclick event of the button does not. Any ideas? The textboxes look something like:

        <input name="ctrlJPView$tbcTabContainer$Details$JP_Details_Address2Text" type="text"
               value="test" id="ctrlJPView_tbcTabContainer_Details_JP_Details_Address2Text"
               onchange="addSaveDetails('Jobs###' + document.getElementById('ctrlJPView_tbcTabContainer_Details_JP_Details_Address2Text').value + ');"
               style="font-size:8pt;Left:110px;Top:29px;Width:420px;Height:13px;Position:absolute;" />

    My save button:

        <input type="button" name="ctrlJPView$btnOk" value="OK"
               onclick="saveAmendments();refreshJobGrids();return false;__doPostBack('ctrlJPView$btnOk','')"
               id="ctrlJPView_btnOk" class="ControlText" style="width:60px;" />

    UPDATE: I guess this comes down to one of two things: 1) something is happening before the onclick of the button gets called that suppresses that call, such as an inadvertent return false; or 2) the onclick event isn't firing at all. Now I've remmed out everything actually inside the functions that are being called beforehand, but the problem persists. But if I remove the call altogether, it works (???)
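
    One pattern that matches these symptoms, offered as a hypothesis: pressing the button fires mousedown, then the textbox loses focus and its onchange runs, and if that handler changes the DOM or throws, the subsequent click can be lost. A small sketch of the usual workaround, reacting on mousedown instead, using the IDs from the question:

        // mousedown fires before the textbox's blur/onchange can get in the way
        document.getElementById('ctrlJPView_btnOk').onmousedown = function () {
            saveAmendments();
            refreshJobGrids();
        };

    Separately, the onchange attribute above ends with + '); which is an unterminated string literal, so that handler as pasted would throw a syntax error; that may just be an artifact of posting, but it is worth verifying.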

  • How to convert a bitmap into a byte array in Android

    - by satyamurthy
    Hi all, I am new to Android. I am reading an image from the SD card, converting it into a bitmap, and then converting the bitmap into a byte array. Please suggest a solution for this code:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);

            ImageView image = (ImageView) findViewById(R.id.picview);
            EditText value = (EditText) findViewById(R.id.EditText01);

            FileInputStream in;
            BufferedInputStream buf;
            try {
                in = new FileInputStream("/sdcard/pictures/1.jpg");
                buf = new BufferedInputStream(in, 1070);
                System.out.println("1.................." + buf);

                byte[] bMapArray = new byte[buf.available()];
                buf.read(bMapArray);
                Bitmap bMap = BitmapFactory.decodeByteArray(bMapArray, 0, bMapArray.length);

                for (int i = 0; i < bMapArray.length; i++) {
                    System.out.print("bytearray" + bMapArray[i]);
                }

                image.setImageBitmap(bMap);
                value.setText(bMapArray.toString());

                if (in != null) {
                    in.close();
                }
                if (buf != null) {
                    buf.close();
                }
            } catch (Exception e) {
                Log.e("Error reading file", e.toString());
            }
        }

    The output is:

        04-12 16:41:16.168: INFO/System.out(728): 4......................[B@435a2908

    This is the result for the byte array; it does not display the whole byte array. The array size is 1034. Please suggest a solution.
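
    Two notes, hedged as general Android/Java facts rather than a review of the whole app: [B@435a2908 is just the default toString() of a byte[] (a type tag plus an identity hash), not its contents, and the standard way to get bytes back out of a Bitmap is Bitmap.compress(). A small sketch:

        import java.io.ByteArrayOutputStream;
        import java.util.Arrays;

        // Bitmap -> byte[] via compress(); PNG is lossless, quality is ignored for PNG.
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        bMap.compress(Bitmap.CompressFormat.PNG, 100, stream);
        byte[] byteArray = stream.toByteArray();

        // To actually see the contents of a byte[], print element by element or use:
        value.setText(Arrays.toString(byteArray));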

  • BlackBerry: Reduce space between 2 buttons in HorizontalFieldManager

    - by user469999
    Hi, in BlackBerry I have created a HorizontalFieldManager and added some small buttons to it to display a toolbar at the bottom of the screen. My problem is that there is too much space between two buttons. I have to reduce this space so that I can manage to place at least 6 buttons at the bottom of the screen. I am using the BFmsg.setMargin(305,0,0,40) statement. Can anyone please help me with this? Following is my code:

        BFcontacts = new ButtonField("Cnt") {
            protected void paint(Graphics graphics) {
                //Bitmap contactsbitmap = Bitmap.getBitmapResource("contacts.jpg");
                //graphics.drawBitmap(0, 0, contactsbitmap.getWidth(), contactsbitmap.getHeight(), contactsbitmap, 0, 0);
                graphics.setColor(Color.WHITE);
                graphics.drawText("Cnt", 0, 0);
            }
        };
        BFcontacts.setMargin(305, 0, 0, 10); // vertical pos, 0, 0, horizontal pos
        HFM.add(BFcontacts);

        BFmsg = new ButtonField("Msgs") {
            protected void paint(Graphics graphics) {
                //Bitmap msgsbitmap = Bitmap.getBitmapResource("messages.jpg");
                //graphics.drawBitmap(0, 0, msgsbitmap.getWidth(), msgsbitmap.getHeight(), msgsbitmap, 0, 0);
                graphics.setColor(Color.WHITE);
                graphics.drawText("Msgs", 0, 0);
            }
        };
        BFmsg.setMargin(305, 0, 0, 40); // vertical pos, 0, 0, horizontal pos : original
        HFM.add(BFmsg);

        add(HFM);

    Thanks in advance
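
    A hedged sketch based on the Field.setMargin(top, right, bottom, left) signature: the 40px left margin on BFmsg is itself the gap between the two buttons, so shrinking each button's left margin, and positioning the whole toolbar once via the manager instead of pushing every button down 305px, tightens the row:

        // Position the toolbar once via the manager, not per button.
        HFM.setMargin(305, 0, 0, 0);

        // Small, uniform gaps between buttons: setMargin(top, right, bottom, left).
        BFcontacts.setMargin(0, 0, 0, 2);
        BFmsg.setMargin(0, 0, 0, 2);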

  • Execute code on assembly load

    - by Dmitriy Matveev
    I'm working on a wrapper for a huge unmanaged library. Almost every one of its functions can call some error handler deep inside. The default error handler writes the error to the console and calls abort(). This behavior is undesirable for a managed library, so I want to replace the default error handler with my own, which will just throw some exception and let the program continue normal execution after this exception is handled. The error handler must be changed before any of the wrapped functions is called. The wrapper library is written in managed C++ with static linkage to the wrapped library, so there is nothing like "a type with hundreds of DLL imports" present. I also can't find a single type that is used by everything inside the wrapper library, so I can't solve the problem by defining a static constructor in one single type which will execute the code I need. I currently see two ways of solving the problem:

    1. Define some static method like Library.Initialize() which must be called once by the client before his code uses any part of the wrapper library.
    2. Find the smallest subset of types which is used by every top-level function (I think the size of this subset will be something like 25-50 types) and add static constructors calling Library.Initialize (which would be internal in that scenario) to every one of these types.

    I've read this and this question, but they didn't help me. Is there a proper way of solving this problem? Maybe some nice hacks available?
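
    A third avenue worth investigating, sketched under the assumption that the wrapper is compiled with /clr (C++/CLI): file-scope native objects in such an assembly are constructed by the compiler-generated module initializer, which runs when the assembly is loaded, before client code can reach any wrapped function. set_my_error_handler and ThrowingHandler are hypothetical names standing in for the library's real hook:

        // In one .cpp of the C++/CLI wrapper assembly.
        namespace {
            struct LibraryInit {
                LibraryInit() {
                    // Assumption: under /clr, native static initializers run from the
                    // module .cctor when the assembly is loaded.
                    set_my_error_handler(&ThrowingHandler); // hypothetical library hook
                }
            };
            LibraryInit g_libraryInit; // file-scope: constructed at load time
        }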

  • LINQ to SQL and DataPager

    - by Jonathan S.
    I'm using LINQ to SQL to search a fairly large database and am unsure of the best approach to performing paging with a DataPager. I am aware of the Skip() and Take() methods and have those working properly. However, I'm unable to use the count of the results for the DataPager, as it will always be at most the page size passed to Take(). For example:

        var result = (from c in db.Customers
                      where c.FirstName == "JimBob"
                      select c).Skip(0).Take(10);

    This query will always return 10 or fewer results, even if there are 1000 JimBobs. As a result, the DataPager will always think there's a single page, and users aren't able to navigate across the entire result set. I've seen one online article where the author just wrote another query to get the total count and called that, something like:

        int resultCount = (from c in db.Customers
                           where c.FirstName == "JimBob"
                           select c).Count();

    and used that value for the DataPager. But I'd really rather not have to copy and paste every query into a separate call everywhere I want to page the results, for obvious reasons. Is there an easier way to do this that can be reused across multiple queries? Thanks.
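
    A minimal sketch of one reusable shape for this: build the filtered IQueryable once, then let a small extension method run both the Count() and the page fetch from it. PagedResult is an illustrative helper type, not part of LINQ to SQL:

        using System.Collections.Generic;
        using System.Linq;

        public class PagedResult<T>
        {
            public int TotalCount { get; set; }  // drives the DataPager
            public List<T> Rows { get; set; }    // the visible page
        }

        public static class PagingExtensions
        {
            public static PagedResult<T> ToPagedResult<T>(
                this IQueryable<T> query, int startRow, int pageSize)
            {
                return new PagedResult<T>
                {
                    TotalCount = query.Count(),                          // SELECT COUNT(*)
                    Rows = query.Skip(startRow).Take(pageSize).ToList()  // one page
                };
            }
        }

        // Usage: the where-clause is written only once.
        // var jimBobs = db.Customers.Where(c => c.FirstName == "JimBob");
        // var page = jimBobs.ToPagedResult(0, 10);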
