Search Results


  • Problem with displaying image from DB using asp.net MVC 2

    - by Renato
    Hello! I'm having trouble displaying an image from the DB. I think the method for saving the image is working, because I can see in the DB that the varbinary fields have values, the image type is correct, and so is the image size. But when I want to display the image for a product I don't get anything, just blank space. Here is my code: public byte[] GetImage(int productID) { Product product = products.GetByID(productID); byte[] imageData = product.ImageData.ToArray(); return imageData; } That code is in my controller. The second piece is the code from the view: <% if (product.ImageData != null) { %> <img src='/product/getimage/<%= Html.Encode(product.ID) %>' alt=""/> <% } %> I tried some solutions found here on Stack Overflow, and everyone does it like this, and it works for them. I have no idea why the images aren't displayed. When I look at the page source while debugging I have: <img src='/product/getimage/18' alt=""/> I'm using .NET 4.0, MVC 2, VS 2010... Thanks in advance


  • Got Hacked. Want to understand how.

    - by gaoshan88
    Someone has, for the second time, appended a chunk of javascript to a site I help run. This javascript hijacks Google adsense, inserting their own account number, and sticking ads all over. The code is always appended, always in one specific directory (one used by a third party ad program), affects a number of files in a number of directories inside this one ad dir (20 or so) and is inserted at roughly the same overnight time. The adsense account belongs to a Chinese website (located in a town not an hour from where I will be in China next month. Maybe I should go bust heads... kidding, sort of), btw. So, how could they append text to these files? Is it related to the permissions set on the files (ranging from 755 to 644)? To the webserver user (it's on MediaTemple so it should be secure, yes?)? I mean, if you have a file that has permissions set to 777 I still can't just add code to it at will... how might they be doing this?
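
    A side note on triage: a small script can flag files in that ad directory that were changed recently or that contain the injected snippet, which helps correlate the modification time with the web server's access logs. This is a minimal sketch, not a fix; the directory path and the marker string are placeholder assumptions.

        import os
        import time

        AD_DIR = "/var/www/site/ads"      # hypothetical path to the third-party ad directory
        MARKER = b"google_ad_client"      # assumed fragment of the injected adsense snippet
        WINDOW = 24 * 3600                # flag files modified in the last 24 hours

        now = time.time()
        for root, _, files in os.walk(AD_DIR):
            for name in files:
                path = os.path.join(root, name)
                st = os.stat(path)
                recently_changed = now - st.st_mtime < WINDOW
                with open(path, "rb") as f:
                    injected = MARKER in f.read()
                if recently_changed or injected:
                    # print mode, owner and mtime to match against access/FTP logs
                    print(path, oct(st.st_mode & 0o777), st.st_uid, time.ctime(st.st_mtime))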


  • C++ Check Substring of a String

    - by user69514
    I'm trying to check whether or not the second argument to my program is a substring of the first argument. The problem is that it only works if the substring starts with the same letter as the string, i.e. Michigan - Mich (this works) Michigan - Mi (this works) Michigan - igan (this doesn't work) #include <stdio.h> #include <string.h> #include <string> using namespace std; bool my_strstr( string str, string sub ) { bool flag = true; int startPosition = -1; char subStart = str.at(0); char strStart; //find starting position for(int i=0; i<str.length(); i++){ if(str.at(i) == subStart){ startPosition = i; break; } } for(int i=0; i<sub.size(); i++){ if(sub.at(i) != str.at(startPosition)){ flag = false; break; } startPosition++; } return flag; } int main(int argc, char **argv){ if (argc != 3) { printf ("Usage: check <string one> <string two>\n"); } string str1 = argv[1]; string str2 = argv[2]; bool result = my_strstr(str1, str2); if(result == 1){ printf("%s is a substring of %s\n", argv[2], argv[1]); } else{ printf("%s is not a substring of %s\n", argv[2], argv[1]); } return 0; }
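
    For reference, the textbook brute-force check tries every starting offset in the outer string instead of anchoring the comparison at one position, which is why a suffix like "igan" is never found by the code above. A minimal language-neutral sketch of that idea in Python:

        def is_substring(text, sub):
            # try every possible starting position in text
            for start in range(len(text) - len(sub) + 1):
                if text[start:start + len(sub)] == sub:
                    return True
            return False

        assert is_substring("Michigan", "igan")
        assert is_substring("Michigan", "Mich")
        assert not is_substring("Michigan", "gain")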


  • Using Mergesort to calculate number of inversions in C++

    - by Brown
    void MergeSort(int A[], int n, int B[], int C[]) { if(n > 1) { Copy(A,0,floor(n/2),B,0,floor(n/2)); Copy(A,floor(n/2),n-1,C,0,floor(n/2)-1); MergeSort(B,floor(n/2),B,C); MergeSort(C,floor(n/2),B,C); Merge(A,B,0,floor(n/2),C,0,floor(n/2)-1); } }; void Copy(int A[], int startIndexA, int endIndexA, int B[], int startIndexB, int endIndexB) { while(startIndexA < endIndexA && startIndexB < endIndexB) { B[startIndexB]=A[startIndexA]; startIndexA++; startIndexB++; } }; void Merge(int A[], int B[],int leftp, int rightp, int C[], int leftq, int rightq) //Here each sub array (B and C) have both left and right indices variables (B is an array with p elements and C is an element with q elements) { int i=0; int j=0; int k=0; while(i < rightp && j < rightq) { if(B[i] <=C[j]) { A[k]=B[i]; i++; } else { A[k]=C[j]; j++; inversions+=(rightp-leftp); //when placing an element from the right array, the number of inversions is the number of elements still in the left sub array. } k++; } if(i=rightp) Copy(A,k,rightp+rightq,C,j,rightq); else Copy(A,k,rightp+rightq,B,i,rightp); } I am specifically confused on the effect of the second 'B' and 'C' arguments in the MergeSort calls. I need them in there so I have access to them for Copy and and Merge, but
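
    For comparison, here is a compact Python sketch of inversion counting during merge sort; it uses the same rule as the Merge above (taking an element from the right half adds one inversion per element still waiting in the left half), but returns the sorted array instead of juggling the B and C work buffers:

        def sort_and_count(a):
            if len(a) <= 1:
                return a, 0
            mid = len(a) // 2
            left, inv_left = sort_and_count(a[:mid])
            right, inv_right = sort_and_count(a[mid:])
            merged, i, j = [], 0, 0
            inversions = inv_left + inv_right
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i])
                    i += 1
                else:
                    merged.append(right[j])
                    j += 1
                    inversions += len(left) - i   # everything left in the left half is greater
            merged.extend(left[i:])
            merged.extend(right[j:])
            return merged, inversions

        print(sort_and_count([2, 4, 1, 3, 5]))    # ([1, 2, 3, 4, 5], 3)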


  • Python: Most efficient way to concatenate and rearrange files

    - by user300890
    Hi, I am reading from several files, each file is divided into 2 pieces, first a header section of a few thousand lines followed by a body of a few thousand. My problem is I need to concatenate these files into one file where all the headers are on the top followed by the body. Currently I am using two loops; one to pull out all the headers and write them, and the second to write the body of each file (I also include a tmp_count variable to limit the number of lines to be loading into memory before dumping to file). This is pretty slow - about 6min for 13gb file. Can anyone tell me how to optimize this or if there is a faster way to do this in python ? Thanks! Here is my code: def cat_files_sam(final_file_name,work_directory_master,file_count): final_file = open(final_file_name,"w") if len(file_count) > 1: file_count=sort_output_files(file_count) # only for @ headers for bowtie_file in file_count: #print bowtie_file tmp_list = [] tmp_count = 0 for line in open(os.path.join(work_directory_master,bowtie_file)): if line.startswith("@"): if tmp_count == 1000000: final_file.writelines(tmp_list) tmp_list = [] tmp_count = 0 tmp_list.append(line) tmp_count += 1 else: final_file.writelines(tmp_list) break for bowtie_file in file_count: #print bowtie_file tmp_list = [] tmp_count = 0 for line in open(os.path.join(work_directory_master,bowtie_file)): if line.startswith("@"): continue if tmp_count == 1000000: final_file.writelines(tmp_list) tmp_list = [] tmp_count = 0 tmp_list.append(line) tmp_count += 1 final_file.writelines(tmp_list) final_file.close()
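
    One way to speed this up is to keep the two passes but stop buffering lines by hand: the first pass copies only the leading '@' header lines, and the second pass skips past the headers and then streams the remainder of each file in large chunks with shutil.copyfileobj instead of a line-by-line Python loop. A rough sketch, assuming (as the original break does) that the '@' headers form a contiguous prefix of each file; the sort_output_files step from the original is left out:

        import os
        import shutil

        def cat_files_sam(final_file_name, work_directory_master, file_count):
            with open(final_file_name, "wb") as final_file:
                # pass 1: copy just the header prefix of every input file
                for bowtie_file in file_count:
                    with open(os.path.join(work_directory_master, bowtie_file), "rb") as src:
                        for line in src:
                            if not line.startswith(b"@"):
                                break
                            final_file.write(line)
                # pass 2: skip the header prefix, then stream the body in 1 MB chunks
                for bowtie_file in file_count:
                    with open(os.path.join(work_directory_master, bowtie_file), "rb") as src:
                        offset = 0
                        for line in src:
                            if not line.startswith(b"@"):
                                break
                            offset += len(line)
                        src.seek(offset)              # rewind to the first body line
                        shutil.copyfileobj(src, final_file, 1024 * 1024)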


  • Unit testing, mocking - simple case: Service - Repository

    - by rafek
    Consider the following chunk of service: public class ProductService : IProductService { private IProductRepository _productRepository; // Some initlization stuff public Product GetProduct(int id) { try { return _productRepository.GetProduct(id); } catch (Exception e) { // log, wrap then throw } } } Let's consider a simple unit test: [Test] public void GetProduct_return_the_same_product_as_getProduct_on_productRepository() { var product = EntityGenerator.Product(); _productRepositoryMock.Setup(pr => pr.GetProduct(product.Id)).Returns(product); Product returnedProduct = _productService.GetProduct(product.Id); Assert.AreEqual(product, returnedProduct); _productRepositoryMock.VerifyAll(); } At first it seems that this test is OK. But let's change our service method a little bit: public Product GetProduct(int id) { try { var product = _productRepository.GetProduct(id); product.Owner = "totallyDifferentOwner"; return product; } catch (Exception e) { // log, wrap then throw } } How would you rewrite the given test so that it passes with the first service method and fails with the second one? How do you handle this kind of simple scenario? HINT: The given test is bad because product and returnedProduct are actually the same reference.
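
    The same trap can be reproduced with any mocking library: if the assertion compares the returned object to the very instance the mock handed out, a mutation inside the service is invisible. One common fix is to snapshot the expected state before the call and compare against that. A small illustrative sketch in Python with unittest.mock (the names are stand-ins, not the original C# types):

        import copy
        import unittest
        from unittest.mock import Mock

        class ProductService:
            def __init__(self, repository):
                self._repository = repository

            def get_product(self, product_id):
                product = self._repository.get_product(product_id)
                product["owner"] = "totallyDifferentOwner"   # the mutation the test should catch
                return product

        class GetProductTest(unittest.TestCase):
            def test_returns_repository_product_unchanged(self):
                product = {"id": 1, "owner": "originalOwner"}
                expected = copy.deepcopy(product)            # snapshot taken before the call
                repository = Mock()
                repository.get_product.return_value = product

                returned = ProductService(repository).get_product(1)

                # fails for the mutating service, passes for the pass-through one
                self.assertEqual(expected, returned)
                repository.get_product.assert_called_once_with(1)

        if __name__ == "__main__":
            unittest.main()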


  • Two radically different queries against 4 mil records execute in the same time - one uses brute force.

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute-force query to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you. Brute-force query: Index-based exception query: My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.


  • Single log file for multiple webapps

    - by Ashish Aggarwal
    In my Tomcat there are multiple webapps deployed and they communicate with each other. Currently they each have their own log file. But when some issue comes up during a call, I first have to check the app the call was made to and then check the log files of the other apps involved in that call. Since all the apps are deployed in the same Tomcat and share a common log4j, I want that when a call is made to any app, all logging for that call ends up in a single log file - no matter how many of my webapps are involved, every error from any webapp during the call should go to that one log file. I have no idea how I can achieve this, so any help is appreciated. Edit: I think my question was not clear, so here is a use case: I have three webapps A, B and C, with log files A.log, B.log and C.log. I make two calls, the first to A (which internally calls C) and the second to B (which internally calls C). Now the logging of the first call must go to A.log (including the logs of every step performed inside webapp C) and the second call must go to B.log (including the logs of every step performed inside webapp C).


  • emacs lisp mapcar doesn't apply function to all elements?

    - by Stephen
    Hi, I have a function that takes a list and replaces some elements. I have constructed it as a closure so that the free variable cannot be modified outside of the function. (defun transform (elems) (lexical-let ( (elems elems) ) (lambda (seq) (let (e) (while (setq e (car elems)) (setf (nth e seq) e) (setq elems (cdr elems))) seq)))) I call this on a list of lists. (defun tester (seq-list) (let ( (elems '(1 3 5)) ) (mapcar (transform elems) seq-list))) => ((10 1 8 3 6 5 4 3 2 1) ("a" "b" "c" "d" "e" "f")) It does not seem to apply the function to the second element of the list provided to tester(). However, if I explicitly apply this function to the individual elements, it works... (defun tester (seq-list) (let ( (elems '(1 3 5)) ) (list (funcall (transform elems) (car seq-list)) (funcall (transform elems) (cadr seq-list))))) => ((10 1 8 3 6 5 4 3 2 1) ("a" 1 "c" 3 "e" 5)) If I write a simple function using the same concepts as above, mapcar seems to work... What could I be doing wrong? (defun transform (x) (lexical-let ( (x x) ) (lambda (y) (+ x y)))) (defun tester (seq) (let ( (x 1) ) (mapcar (transform x) seq))) (tester (list 1 3)) => (2 4) Thanks
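
    The effect is easier to see stripped of the elisp details: the closure returned by transform consumes its captured elems, so the single closure that mapcar reuses has nothing left by the time it reaches the second sequence, while building a fresh closure per call starts with a full list each time. A Python rendering of that difference (illustrative names only):

        def transform(elems):
            elems = list(elems)            # captured state shared by every call of the closure
            def replace(seq):
                seq = list(seq)
                while elems:
                    e = elems.pop(0)       # consumes the captured list, like the setq/cdr loop
                    seq[e] = e
                return seq
            return replace

        seqs = [[10, 1, 8, 3, 6, 5, 4, 3, 2, 1], ["a", "b", "c", "d", "e", "f"]]

        shared = transform([1, 3, 5])
        print([shared(s) for s in seqs])                 # second list untouched: state used up
        print([transform([1, 3, 5])(s) for s in seqs])   # fresh closure per list: both updated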


  • How to send Event signal through Processes - C

    - by Jamie Keeling
    Hello all! I have an application consisting of two windows; one communicates with the other and sends it a struct containing two integers (in this case two rolls of a die). I will be using events for the following circumstances: process A sends data to process B, and process B displays the data; process A closes, in turn closing process B; process B closes, in turn closing process A. I have noticed that if the second process is constantly waiting for the first process to send data then the program just sits waiting, which is where the idea of implementing threads in each process came from, and I have started to implement this already. The problem I'm having is that I don't have a lot of experience with threads and events, so I'm not sure of the best way to actually implement what I want to do. I'm trying to work out how the other process will know the event has been fired so it can do the tasks it needs to do; I don't understand how one process that is separate from another can tell what state the events are in, especially as it needs to act as soon as an event has changed state. Thanks for any help. Edit: I can only use the Create/Set/Open methods for events, sorry for not mentioning it earlier.
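
    This is not the Win32 Create/Set/OpenEvent API the question is restricted to, but the shape of the signalling is the same with any event primitive: the producer sets an event after publishing data, the consumer waits on it (with a timeout so it can also notice a shutdown event), and a separate shutdown event ends both sides. A rough Python sketch of that shape using multiprocessing, with dice rolls standing in for the struct:

        import random
        import time
        import multiprocessing as mp

        def roller(conn, data_ready, shutdown):
            while not shutdown.is_set():
                conn.send((random.randint(1, 6), random.randint(1, 6)))
                data_ready.set()              # signal: a new pair of dice values is available
                if shutdown.wait(1.0):        # roll once a second until told to stop
                    break

        def display(conn, data_ready, shutdown):
            while not shutdown.is_set():
                if data_ready.wait(0.5):      # block until signalled, but poll for shutdown
                    data_ready.clear()
                    print("dice:", conn.recv())

        if __name__ == "__main__":
            data_ready, shutdown = mp.Event(), mp.Event()
            recv_end, send_end = mp.Pipe(duplex=False)
            a = mp.Process(target=roller, args=(send_end, data_ready, shutdown))
            b = mp.Process(target=display, args=(recv_end, data_ready, shutdown))
            a.start(); b.start()
            time.sleep(3)
            shutdown.set()                    # either side setting this ends both loops
            a.join(); b.join()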


  • Retrieve values from multimdimensional array

    - by vincentlerry
    I have a great difficulty. I need to retrieve [title], [url] and [abstract] values ??from this multidimensional array. Also, I have to store those values in mysql database. thanks in advance!!! Array ( [bossresponse] = Array ( [responsecode] = 200 [limitedweb] = Array ( [start] = 0 [count] = 20 [totalresults] = 972000 [results] = Array ( [0] = Array ( [date] = [clickurl] = http://www.torchlake.com/ [url] = http://www.torchlake.com/ [dispurl] = www.torchlake.com [title] = Torch Lake, COLI Inc, Highspeed, Dial-up, Wireless ... [abstract] = Welcome to COLI Inc. Chain O' Lake Internet. Local Northern Michigan ISP, offering Dialup Internet access, Wireless access, Web design, and T1 services in Northern ... ) [1] = Array ( [date] = [clickurl] = http://en.wikipedia.org/wiki/Torch_Lake_(Antrim_County,_Michigan) [url] = http://en.wikipedia.org/wiki/Torch_Lake_(Antrim_County,_Michigan) [dispurl] = en.wikipedia.org/wiki/Torch_Lake_(Antrim_County,_Michigan) [title] = Torch Lake (Antrim County, Michigan) - Wikipedia, the free ... [abstract] = Torch Lake at 19 miles (31 km) long is Michigan's longest lake and at approximately 18,770 acres (76 km²) is Michigan's second largest lake. Within it are several ... ) this is the entire code that generates this array: require("OAuth.php"); $cc_key = ""; $cc_secret = ""; $url = "http://yboss.yahooapis.com/ysearch/limitedweb"; $args = array(); $args["q"] = "car"; $args["format"] = "json"; $args["count"] = 20; $consumer = new OAuthConsumer($cc_key, $cc_secret); $request = OAuthRequest::from_consumer_and_token($consumer, NULL,"GET", $url, $args); $request-sign_request(new OAuthSignatureMethod_HMAC_SHA1(), $consumer, NULL); $url = sprintf("%s?%s", $url, OAuthUtil::build_http_query($args)); $ch = curl_init(); $headers = array($request-to_header()); curl_setopt($ch, CURLOPT_HTTPHEADER, $headers); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); $rsp = curl_exec($ch); $results = json_decode($rsp, true);
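
    Once the JSON is decoded into a nested map, the three fields sit under bossresponse → limitedweb → results, so they can be collected by walking that path (in the PHP code this is $results['bossresponse']['limitedweb']['results']). A minimal Python sketch of the traversal; the file name is a placeholder for wherever the decoded response comes from:

        import json

        with open("boss_response.json") as f:       # hypothetical dump of the API response
            data = json.load(f)

        rows = []
        for result in data["bossresponse"]["limitedweb"]["results"]:
            rows.append((result["title"], result["url"], result["abstract"]))

        for title, url, abstract in rows:
            print(title, "|", url, "|", abstract)
        # each (title, url, abstract) tuple can then be bound to a parameterized
        # INSERT statement rather than concatenated into the SQL text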


  • How to force div to appear below not next to another?

    - by Vafello
    I would like to place my div below the list, but actually it is placed next to the list.The list is generated dynamically, so it doesn't have a fixed hight. I would like to have the map div on the right, and on the left (next to the map) the list placed on top and the second div below the list. #map { float:left; width:700px; height:500px; } #list { float:left; width:200px; background:#eee; list-style:none; padding:0; } #similar { float:left; width:200px; background:#000; } <div id="map"></div> <ul id="list"></ul> <div id ="similar"> this text should be below, not next to ul. </div> Any ideas?


  • Ways to update a dependent table in the same MySQL transaction?

    - by codie
    I need to update two tables inside a single transaction. The individual queries look something like this: 1. INSERT INTO t1 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = val2; If the above query causes an insert then I need to run the following statement on the second table: 2. INSERT INTO t2 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = col2 + val2; otherwise, 3. UPDATE t2 SET col2 = col2 - old_val2 + val2 WHERE col1 = val1; -- old_val2 is the value of t1.col2 before it was updated Right now I run a SELECT on t1 first, to determine whether statement 1 will cause an insert or update on t1. Then I run statement 1 and either of 2 and 3 inside a transaction. What are the ways in which I can do all of these inside one transaction itself? The approach I was thinking of is the following: UPDATE t2, t1 set t2.col2 = t2.col2 - t1.col2 WHERE t1.col1 = t2.col2 and t1.col1 = val1; INSERT INTO t1 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = val2; INSERT INTO t2, t1 (t2.col1, t2.col2) VALUES (t1.col1, t1.col2) ON DUPLICATE KEY UPDATE t2.col2 = t2.col2 + t1.col2 WHERE t1.col1 = t2.col2 and t1.col1 = val1; Unfortunately, there's no multi-table INSERT... ON DUPLICATE KEY UPDATE in MySQL 5.0. What else could I do?
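
    One detail that can remove the preliminary SELECT: MySQL reports a different affected-row count for the two branches of INSERT ... ON DUPLICATE KEY UPDATE (1 when a new row is inserted, 2 when an existing row is updated, 0 when it is left unchanged), so the choice between statements 2 and 3 can be made inside the same transaction. A rough sketch of that shape in Python with a MySQL DB-API driver (mysql-connector assumed; old_val2 is still the previously stored t1.col2, as in statement 3, and the exact rowcount semantics depend on the driver's client flags):

        import mysql.connector

        def upsert_pair(conn, val1, val2, old_val2):
            cur = conn.cursor()
            try:
                cur.execute(
                    "INSERT INTO t1 (col1, col2) VALUES (%s, %s) "
                    "ON DUPLICATE KEY UPDATE col2 = VALUES(col2)",
                    (val1, val2),
                )
                if cur.rowcount == 1:          # 1 => a fresh row was inserted into t1
                    cur.execute(
                        "INSERT INTO t2 (col1, col2) VALUES (%s, %s) "
                        "ON DUPLICATE KEY UPDATE col2 = col2 + VALUES(col2)",
                        (val1, val2),
                    )
                else:                          # 2 (or 0) => the t1 row already existed
                    cur.execute(
                        "UPDATE t2 SET col2 = col2 - %s + %s WHERE col1 = %s",
                        (old_val2, val2, val1),
                    )
                conn.commit()                  # both tables change together or not at all
            except Exception:
                conn.rollback()
                raise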


  • Managed bean property value not set to null

    - by Vladimir
    Hi! I'm new to JSF, so this question might be strange. I have an inputText component's value bound to a managed bean property of type Float. I need the property to be set to null when the inputText field is empty, not to 0. That's not done by default, so I added a converter with the following method implemented: public Object getAsObject(FacesContext arg0, UIComponent arg1, String arg2) throws ConverterException { if (StringUtils.isEmpty(arg2)) { return null; } float result = Float.parseFloat(arg2); if (result == 0) { return null; } return result; } I registered the converter and assigned it to the inputText component. I logged the arg2 argument and also the return value from the getAsObject method. From my log I can see that it returns null. But I also log the setter on the backing bean, and the argument is 0, not null as expected. To be more precise, the setter is called twice, once with a null argument and a second time with 0. It still sets the backing bean value to 0. How can I get the value set to null? Thanks in advance.


  • Parse a CSV file using python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: This is going towards a uni assignment, so I don't want to receive code. :). I'm more looking for approaches; I'm very new to python, having read a book but not yet written any code. The entire task is to import the contents of a CSV file, create a decision tree from the contents of the CSV file (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference to have it capable of dealing with different CSV files (I asked if we were allowed to hard code the column names, mostly to eliminate it as a possibility, and the answer was no). The CSV files are in a fairly standard format; the header row is marked with a # then the column names are displayed, and every row after that is a simple series of values. Example: # Column1, Column2, Column3, Column4 Value01, Value02, Value03, Value04 Value11, Value12, Value13, Value14 At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical; so I was thinking of doing something along these lines: Read in each line, character by character If the character is not a comma or a space Append character to temporary string If the character is a comma Append the temporary string to a list Empty string Once a line has been read Create a dictionary using the header row as the key (somehow!) Append that dictionary to a list However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" - I assume that there is some mechanism, but I don't think I know how to do it. Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
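
    Since the assignment rules out sharing real code, here is only the generic reading part, which the standard library already covers: csv.reader (or csv.DictReader) does the splitting, so each row can become a dict keyed by the header names, and "everyone return their values for Column1" is then a Counter over that key. A minimal sketch, assuming the '#'-prefixed header row from the example:

        import csv
        from collections import Counter

        def load_rows(path):
            with open(path, newline="") as f:
                reader = csv.reader(f, skipinitialspace=True)
                header = next(reader)
                header[0] = header[0].lstrip("#").strip()   # drop the leading '#'
                return [dict(zip(header, row)) for row in reader]

        rows = load_rows("training.csv")                    # hypothetical input file
        print(Counter(row["Column1"] for row in rows))      # tally of Column1 values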


  • Prime Numbers in C?

    - by Ali Azam Rana
    FIRST PROGRAM #include<stdio.h> void main() { int n,c; printf("enter a numb"); scanf("%i",n); for(c=2;c<=n;c++) { if(n%c==0) break; } if(c==n) printf("\nprime\n"); else printf("\nnot prime\n"); getchar(); } SECOND PROGRAM #include <stdio.h> int main() { printf("Enter a Number\n"); int in,loop,rem,chk; scanf("%d",&in); for (loop = 1; loop <=in; loop++) { rem = in % loop; if(rem == 0) chk = chk +1; } if (chk == 2) printf("\nPRIME NUM ENTERED\n"); else printf("\nNUM ENTERED NOT PRIME\n"); getchar(); } The second program works; the first one was written by my friend. It looks fine, but on stepping through it we found that the if condition in the first program comes out true for every input. So what is the logical error here? Please help me find it.
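
    As a reference point for checking both programs' output, trial division only needs candidate divisors up to the square root of n, and 0 and 1 have to be rejected explicitly. A small Python sketch of a correct check:

        def is_prime(n):
            if n < 2:
                return False
            d = 2
            while d * d <= n:          # testing divisors up to sqrt(n) is enough
                if n % d == 0:
                    return False
                d += 1
            return True

        print([x for x in range(20) if is_prime(x)])   # [2, 3, 5, 7, 11, 13, 17, 19]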


  • Correct way to do timer function in Python

    - by bwawok
    Hi. I have a GUI application that needs to do something simple in the background (update a wx python progress bar, but that doesn't really matter). I see that there is a threading.timer class.. but there seems to be no way to make it repeat. So if I use the timer, I end up having to make a new thread on every single execution... like : import threading import time def DoTheDew(): print "I did it" t = threading.Timer(1, function=DoTheDew) t.daemon = True t.start() if __name__ == '__main__': t = threading.Timer(1, function=DoTheDew) t.daemon = True t.start() time.sleep(10) This seems like I am making a bunch of threads that do 1 silly thing and die.. why not write it as : import threading import time def DoTheDew(): while True: print "I did it" time.sleep(1) if __name__ == '__main__': t = threading.Thread(target=DoTheDew) t.daemon = True t.start() time.sleep(10) Am I missing some way to make a timer keep doing something? Either of these options seems silly... I am looking for a timer more like a java.util.Timer that can schedule the thread to happen every second... If there isn't a way in Python, which of my above methods is better and why?
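
    A middle ground between the two versions is a single worker thread that waits on an Event: the wait doubles as an interruptible sleep, no new Timer object is created per tick, and cancelling is explicit. A small sketch (if the tick updates a wx widget, the callback should hand the actual UI work back to the GUI thread, e.g. via wx.CallAfter, rather than touching the widget from this thread):

        import threading
        import time

        class RepeatingTimer(threading.Thread):
            def __init__(self, interval, function):
                super().__init__(daemon=True)
                self.interval = interval
                self.function = function
                self._stopped = threading.Event()

            def run(self):
                # Event.wait sleeps for `interval` but wakes immediately on cancel()
                while not self._stopped.wait(self.interval):
                    self.function()

            def cancel(self):
                self._stopped.set()

        t = RepeatingTimer(1, lambda: print("I did it"))
        t.start()
        time.sleep(10)
        t.cancel()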


  • Flex DataGridColumn with array of objects as data provider

    - by rforte
    I have a datagrid that uses an array of objects as the data provider. The objects are essentially key/value pairs: { foo:"something"} { bar:"hello"} { caca:"lorem"} The datagrid has 2 columns. The first column is the key and the second column is the value. Right now my grid looks like: My dataFormatter function makes sure that depending on the column (i.e. the dataField value) the correct key or value gets printed out. This works fine for displaying. However, as soon as I try and edit the value field it essentially adds a new value into the object with a key of '1'. For example, if I edit the {caca:"lorem"} object it will then contain the value {caca:"lorem",1:"new value"}. Is there any possible way I can set the DataGridColumn so that when I edit a value it will update the value associated with the key rather than inserting a new value? I've tried using a custom item editor but it still does the insert. It seems like I need to be able to update the 'dataField' with the actual key value but I'm not sure how to do that.


  • Detaching all entities of T to get fresh data

    - by Goran
    Lets take an example where there are two type of entites loaded: Product and Category, Product.CategoryId - Category.Id. We have available CRUD operations on products (not Categories). If on another screen Categories are updated (or from another user in the network), we would like to be able to reload the Categories, while preserving the context we currently use, since we could be in the middle of editing data, and we do not want changes to be lost (and we cannot depend on saving, since we have incomplete data). Since there is no easy way to tell EF to get fresh data (added, removed and modified), we thought of twp possible ways: 1) Getting products attached to context, and categories detached from context. This would mean that we loose the ability to access Product.Category.Name, which we do sometimes require, so we would need to manually resolve it (example when printing data). 2) detaching / attaching all Categories from current context. Context.ChangeTracker.Entries().Where(x => x.Entity.GetType() == typeof(T)).ForEach(x => x.State = EntityState.Detached); And then reload the categories, which will get fresh data. Do you find any problem with this second approach? We understand that this will require all constraints to be put on foreign keys, and not navigation properties, since when detaching all Categories, Product.Category navigation properties would be reset to null also. Also, there could be a potential performance problem, which we did not test, since there could be couple of thousand products loaded, and all would need to resolve navigation property when reloading. Which of the two do you prefer, and is there a better way (EF6 + .NET 4.0)?


  • Attach an entity that is not new, perhaps having been loaded from another DataContext. LINQ to SQL -

    - by soldieraman
    Alright, here is how I got this error. I have one application sitting on a server, with 2 users accessing it and doing some bulk data processing, e.g. entering values while the application works with another system to extract values for them and then saves. I can't recreate the error. The error logs show: the error happened at the same time in both instances of the application, and both happened on an Attach/Submit (but two different functions). There is no way they are using the same DataContext object, as I save the DataContext in HttpContext.Items. My hunch / guess is: 1) One DataContext was not refreshed, i.e. an object was created for the same item twice as it was new in both forms. E.g. Customer Number - a customer was created (as one couldn't be found) by one DataContext, the other DataContext couldn't find it either (I am using compiled queries to find it in the DataContext), so it created another object and the attach failed. 2) The HttpContext.Items lost its value somehow (I am using a virtual PC as the server - maybe something went wrong there). I am leaning more towards the second, as I can't recreate the error - but it might just be a timing (for attach/save) thing - and the error also makes me think of the second.


  • jboss cache as hibernate 2nd level - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build an architecture basically described in user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (Replicated caches with each cache having its own store.) but having jboss cache configured as hibernate second level cache. I've read manual for several days and played with the settings but could not achieve the result - the data in memory (jboss cache) gets replicated across the hosts, but it's not persisted in the datasource/database of the target (not original) cluster host. I had a hope that a node might become persistent at eviction, so I've got a cache listener and attached it to @NoveEvicted event. I found that though I could adjust eviction policy to fully control it, no any persistence takes place. Then I had a though that I could try to modify CacheLoader to set "passivate" to true, but I found that in my case (hibernate 2nd level cache) I don't have a way to access a loader. I wonder if replicated data persistence is possible at all by configuration tuning ? If not, will it work for me to create some manual peristence in CacheListener (I could check whether the eviction event is local, and if not - persist it to hibernate datasource somehow) ? I've used mvcc-entity configuration with the modification of cacheMode - set to REPL_ASYNC. I've also played with the eviction policy configuration. Last thing to mention is that I've tested entty persistence and replication in project that has been generated with Seam. I guess it's not important though.


  • How do you keep text from wrapping in an NSTableView using NSAttributedString

    - by Justin
    I have an NSTableView that has 2 columns, one for an icon and the other for two lines of text. In the second column, the text column, I have some larger text that is for the name of an item. Then I have a new line and some smaller text that describes the state of the item. When the name becomes so large that it doesn't fit on one line it wraps (or when you shrink the window down so small that it causes the names to not fit on a single line). row1=============== | image |  some name   | | image |   idle               | row2================ | image |  some name really long name   | <- this gets wrapped pushing 'idle' out of the view | image |   idle               | =================== My question is, how could I keep the text from wrapping and just have the NSTableView display a horizontal scroll-bar once the name is too large to fit?


  • Getting a Specified Cast is not valid while importing data from Excel using Linq to SQL

    - by niceoneishere
    This is my second post. After learning from my first post how fantastic is to use Linq to SQL, I wanted to try to import data from a Excel sheet into my SQL database. First My Excel Sheet: it contains 4 columns namely ItemNo ItemSize ItemPrice UnitsSold I have a created a database table with the following fields table name ProductsSold Id int not null identity --with auto increment set to true ItemNo VarChar(10) not null ItemSize VarChar(4) not null ItemPrice Decimal(18,2) not null UnitsSold int not null Now I created a dal.dbml file based on my database and I am trying to import the data from excel sheet to db table using the code below. Everything is happening on click of a button. private const string forecast_query = "SELECT ItemNo, ItemSize, ItemPrice, UnitsSold FROM [Sheet1$]"; protected void btnUpload_Click(object sender, EventArgs e) { var importer = new LinqSqlModelImporter(); if (fileUpload.HasFile) { var uploadFile = new UploadFile(fileUpload.FileName); try { fileUpload.SaveAs(uploadFile.SavePath); if(File.Exists(uploadFile.SavePath)) { importer.SourceConnectionString = uploadFile.GetOleDbConnectionString(); importer.Import(forecast_query); gvDisplay.DataBind(); pnDisplay.Visible = true; } } catch (Exception ex) { Response.Write(ex.Source.ToString()); lblInfo.Text = ex.Message; } finally { uploadFile.DeleteFileNoException(); } } } // Now here is the code for LinqSqlModelImporter public class LinqSqlModelImporter : SqlImporter { public override void Import(string query) { // importing data using oledb command and inserting into db using LINQ to SQL using (var context = new WSDALDataContext()) { using (var myConnection = new OleDbConnection(base.SourceConnectionString)) using (var myCommand = new OleDbCommand(query, myConnection)) { myConnection.Open(); var myReader = myCommand.ExecuteReader(); while (myReader.Read()) { context.ProductsSolds.InsertOnSubmit(new ProductsSold() { ItemNo = myReader.GetString(0), ItemSize = myReader.GetString(1), ItemPrice = myReader.GetDecimal(2), UnitsSold = myReader.GetInt32(3) }); } } context.SubmitChanges(); } } } can someone please tell me where am I making the error or if I am missing something, but this is driving me nuts. When I debugged I am getting this error when casting from a number the value must be a number less than infinity I really appreciate it


  • How to handle management trying to interfere with the project (including architecture decision)

    - by Zwei Steinen
    I feel this is not a very good question to post on SO, but I need some advice from experienced developers... (I'm a second-year developer.) I guess this is a problem for many, many projects, but in our case it is getting intense. There was so much interference from people who don't know the first thing about software development that our development came to an almost complete stop. We had to literally escape to another location to get any useful work done. We were happily producing results, but then I got a request for a "meeting" and it's them again. I have a friendly relationship with them, but I feel very daunted at the thought of talking about nonsense all over again. Should I be firm and tell them to shut up and wait for our results? Or should I be diplomatic and create the illusion that they are making a positive contribution or something? My current urge is to be unfriendly and mumble some stuff so they will give up. What would you do if you were in this situation?


  • How might I wrap the FindXFile-style APIs to the STL-style Iterator Pattern in C++?

    - by BillyONeal
    Hello everyone :) I'm working on wrapping up the ugly innards of the FindFirstFile/FindNextFile loop (though my question applies to other similar APIs, such as RegEnumKeyEx or RegEnumValue, etc.) inside iterators that work in a manner similar to the Standard Template Library's istream_iterators. I have two problems here. The first is with the termination condition of most "foreach" style loops. STL style iterators typically use operator!= inside the exit condition of the for, i.e. std::vector<int> test; for(std::vector<int>::iterator it = test.begin(); it != test.end(); it++) { //Do stuff } My problem is I'm unsure how to implement operator!= with such a directory enumeration, because I do not know when the enumeration is complete until I've actually finished with it. I have sort of a hack together solution in place now that enumerates the entire directory at once, where each iterator simply tracks a reference counted vector, but this seems like a kludge which can be done a better way. The second problem I have is that there are multiple pieces of data returned by the FindXFile APIs. For that reason, there's no obvious way to overload operator* as required for iterator semantics. When I overload that item, do I return the file name? The size? The modified date? How might I convey the multiple pieces of data to which such an iterator must refer to later in an ideomatic way? I've tried ripping off the C# style MoveNext design but I'm concerned about not following the standard idioms here. class SomeIterator { public: bool next(); //Advances the iterator and returns true if successful, false if the iterator is at the end. std::wstring fileName() const; //other kinds of data.... }; EDIT: And the caller would look like: SomeIterator x = ??; //Construct somehow while(x.next()) { //Do stuff } Thanks! Billy3
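
    The "enumerate until the API says stop" shape is what generator-style iterators solve: the end condition is simply returning from the enumeration routine, and the several fields each step produces come back bundled in one record, so there is no single obvious value to hang operator* on. A conceptual Python sketch using os.scandir as a stand-in for the FindFirstFile/FindNextFile pair (in C++ the equivalent is usually an input iterator whose value_type is a small struct holding name, size, timestamp, and so on):

        import os
        from typing import Iterator, NamedTuple

        class Entry(NamedTuple):
            name: str
            size: int
            mtime: float

        def iter_dir(path: str) -> Iterator[Entry]:
            # one record per directory entry bundles the fields the API hands back per step
            with os.scandir(path) as it:
                for dirent in it:
                    st = dirent.stat(follow_symlinks=False)
                    yield Entry(dirent.name, st.st_size, st.st_mtime)

        for entry in iter_dir("."):
            print(entry.name, entry.size, entry.mtime)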

