Search Results

Search found 25005 results on 1001 pages for 'sequential number'.


  • How to run an application as root without asking for an admin password?

    - by kvaruni
    I am writing a program in Objective-C (XCode 3.2, on Snow Leopard) that is capable of either selectively blocking certain sites for a duration or only allowing certain sites (and thus blocking all others) for a duration. The reasoning behind this program is rather simple. I tend to get distracted when I have full internet access, but I do need internet access during my working hours to get to a number of work-related websites. Clearly, this is not a permanent block; it only helps me to focus whenever I find myself wandering a bit too much.

    At the moment, I am using a Unix script that is called via AppleScript to obtain Administrator permissions. It then activates a number of ipfw rules and clears those after a specific duration to restore full internet access. Simple and effective, but since I am running as a standard user, it gets cumbersome to enter my administrator password each and every time I want to go "offline". Furthermore, this is a great opportunity to learn to work with XCode and Objective-C. At the moment, everything works as expected, minus the actual blocking. I can add a number of sites to a list, specify whether I want to block or allow these websites, and "start" the blocking by specifying a time until which I want to stay "offline".

    However, I find it hard to obtain clear information on how I can run a privileged Unix command from Objective-C. Ideally, I would like to be able to store information about the Administrator account in the Keychain and use it later on, so that I can simply move into "offline" mode with the convenience of clicking a button. Even more ideally, there might be some class in Objective-C with which I can block access to some/all websites for this particular user without needing to rely on privileged Unix commands. A third possibility is starting this program with root permissions and then reducing the permissions until I need them, but since this is a GUI application that sits in the menu bar of OS X, the results are rather awkward, and getting it to run each and every time with root permission is no easy task.

    Can anyone offer me some pointers or advice? Please, no security warnings; I am fully aware that what I want to do is a potential security threat.

    Read the article

  • How big should your controllers be in ASP.NET MVC?

    - by ooo
    I see the new feature of areas in ASP.NET MVC 2, and it got me thinking: why would I need this? I did some reading on the use cases, and for me it came down to one specific point: how big and how broad in scope should my controllers be? Should I try to have many little controllers? One big controller? How do people determine the sweet spot for the number of controllers? I think mine are maybe too large (which had me questioning areas in the first place, as maybe my controller name should really be an area containing a number of smaller controllers).

    Read the article

  • How do I create a foreign key in SQL Server?

    - by mmattax
    I have never "hand coded" creation code for SQL Server, and foreign key declaration is seemingly different between SQL Server and Postgres... here is my SQL so far:

        drop table exams;
        drop table question_bank;
        drop table anwser_bank;

        create table exams
        (
            exam_id uniqueidentifier primary key,
            exam_name varchar(50),
        );

        create table question_bank
        (
            question_id uniqueidentifier primary key,
            question_exam_id uniqueidentifier not null,
            question_text varchar(1024) not null,
            question_point_value decimal,
            constraint question_exam_id foreign key references exams(exam_id)
        );

        create table anwser_bank
        (
            anwser_id uniqueidentifier primary key,
            anwser_question_id uniqueidentifier,
            anwser_text varchar(1024),
            anwser_is_correct bit
        );

    When I run the query I get this error:

        Msg 8139, Level 16, State 0, Line 9
        Number of referencing columns in foreign key differs from number of referenced columns, table 'question_bank'.

    Can you spot the error? Thanks.
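
    The error message points at the foreign key constraint: in T-SQL a table-level FOREIGN KEY needs an explicit list of referencing columns before REFERENCES. A minimal sketch of a corrected question_bank, assuming the intent is for question_exam_id to reference exams (the constraint name fk_question_exam is arbitrary, chosen only for illustration):

        create table question_bank
        (
            question_id uniqueidentifier primary key,
            question_exam_id uniqueidentifier not null,
            question_text varchar(1024) not null,
            question_point_value decimal,
            constraint fk_question_exam foreign key (question_exam_id)
                references exams(exam_id)
        );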

    Read the article

  • Easiest way to convert VirtualBox snapshots to tree view

    - by amir beygi
    Hi all. My VirtualBox snapshot view is like this:

        Name: Snapshot 2 (UUID: cb45aef4-54d4-4c4e-ad3e-dd7cccb6103a)
        Name: s131 (UUID: 8ec30c82-7796-4e51-8161-979f1b95fb0f)
        Name: s131 (UUID: 42066f33-969b-41f3-a779-7f6e2c45ea2c)
        Name: s131 (UUID: d71b9bc5-b862-46b5-ae4d-f88d3dd9756d)
        Name: s131 (UUID: 681896a9-7e61-4b5a-90bc-cb1bd785c6fc)
        Name: s131 (UUID: d7bf8593-d218-442d-b23b-4ee16e74087d)
        Name: s131 (UUID: e8b16fd2-7add-4294-b908-34c4e6dc79dc)
        Name: s131 (UUID: 57c3f5d7-d4ed-4a62-a7b8-5594f819e08e)
        Name: Snapshot 3 (UUID: 4a684149-9dd6-4bb2-baf5-5f590e91a344)
        Name: Snapshot 4 (UUID: d4cbaa7c-ae78-41e0-9962-46c587a9c667)
        Name: Snapshot 5 (UUID: 81567b6e-eea9-49a6-b3b8-a07f0be337d8) *

    and I want to convert this text to a tree like this:

        Name: Snapshot 2 (UUID: cb45aef4-54d4-4c4e-ad3e-dd7cccb6103a)
        +--Name: s131 (UUID: 8ec30c82-7796-4e51-8161-979f1b95fb0f)
        +--Name: s131 (UUID: 42066f33-969b-41f3-a779-7f6e2c45ea2c)
        +--Name: s131 (UUID: d71b9bc5-b862-46b5-ae4d-f88d3dd9756d)
        +--Name: s131 (UUID: 681896a9-7e61-4b5a-90bc-cb1bd785c6fc)
        | +--Name: s131 (UUID: d7bf8593-d218-442d-b23b-4ee16e74087d)
        | +--Name: s131 (UUID: e8b16fd2-7add-4294-b908-34c4e6dc79dc)
        | +--Name: s131 (UUID: 57c3f5d7-d4ed-4a62-a7b8-5594f819e08e)
        +--Name: Snapshot 3 (UUID: 4a684149-9dd6-4bb2-baf5-5f590e91a344)
        +--Name: Snapshot 4 (UUID: d4cbaa7c-ae78-41e0-9962-46c587a9c667)
        +--Name: Snapshot 5 (UUID: 81567b6e-eea9-49a6-b3b8-a07f0be337d8) *

    or even an array that contains each line number and its parent's line number. My environment is Linux, the programming language is C, and I got these results from this shell command:

        VBoxManage snapshot s2000 showvminfo s|grep Name|grep UUID

    Read the article

  • C# parsing txt files if the name format is the desired format

    - by jakesankey
    OK, I have txt files that I am parsing and saving into a SQL db. The names are formatted like R306025COMP_272A4075_20090929_080159.txt. However, there are a select few (out of thousands of files) with names that are formatted differently (particularly files that were generated as tests), for example R306025COMP_SU2_TestBottom_20090915_101441.txt. The reason this causes a problem for me is that I am using Split('_')[1,2,etc] to extract the R number, the 272A4075 portion, and the 20090929 (date) portion. When the application comes across the oddly named files, it fails because it is trying to parse 'TestBottom' as a date and inserts 'SU2' instead of the 272 number. Basically, I want the app to recognize that if the file's name is not formatted like my first example, it should skip it. Any advice?
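
    One common guard is to validate the name against a pattern before splitting. The check itself is language-agnostic; below is a sketch in Java, where the exact pattern is only an assumption inferred from the example filename above (in C# the equivalent test would be Regex.IsMatch):

        import java.util.regex.Pattern;

        public class FileNameFilter {
            // R-number, part code, date (yyyyMMdd), time (HHmmss): inferred from
            // R306025COMP_272A4075_20090929_080159.txt; adjust if real names differ.
            private static final Pattern EXPECTED =
                    Pattern.compile("^R\\d+COMP_[0-9A-Z]+_\\d{8}_\\d{6}\\.txt$");

            public static boolean hasExpectedFormat(String fileName) {
                return EXPECTED.matcher(fileName).matches();
            }

            public static void main(String[] args) {
                System.out.println(hasExpectedFormat("R306025COMP_272A4075_20090929_080159.txt"));      // true
                System.out.println(hasExpectedFormat("R306025COMP_SU2_TestBottom_20090915_101441.txt")); // false
            }
        }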

    Read the article

  • Most efficient method to query a Young Tableau

    - by Matthieu M.
    A Young Tableau is a 2D matrix A of dimensions M*N such that, for all i,j in [0,M)x[0,N):

        for each p in (i,M), A[i,j] <= A[p,j]
        for each q in (j,N), A[i,j] <= A[i,q]

    That is, it's sorted row-wise and column-wise. Since it may contain fewer than M*N numbers, the bottom-right values might be represented either as missing or using (in algorithm theory) infinity to denote their absence.

    Now the (elementary) question: how to check if a given number is contained in the Young Tableau? Well, it's trivial to produce an algorithm in O(M*N) time of course, but what's interesting is that it is very easy to provide an algorithm in O(M+N) time:

    Bottom-Left search:
    1. Let x be the number we look for; initialize i,j as M-1, 0 (bottom left corner)
    2. If x == A[i,j], return true
    3. If x < A[i,j], then if i is 0, return false, else decrement i and go to 2.
    4. Else, if j is N-1, return false, else increment j

    This algorithm does not make more than M+N moves. The correctness is left as an exercise. It is possible though to obtain a better asymptotic runtime.

    Pivot Search:
    1. Let x be the number we look for; initialize i,j as floor(M/2), floor(N/2)
    2. If x == A[i,j], return true
    3. If x < A[i,j], search (recursively) in A[0:i-1, 0:j-1], A[i:M-1, 0:j-1] and A[0:i-1, j:N-1]
    4. Else, search (recursively) in A[i+1:M-1, 0:j], A[i+1:M-1, j+1:N-1] and A[0:i, j+1:N-1]

    This algorithm proceeds by discarding one of the 4 quadrants at each iteration and running recursively on the 3 left (divide and conquer); the master theorem yields a complexity of O((N+M)**(log 3 / log 4)), which is better asymptotically. However, this is only a big-O estimation...

    So, here are the questions:
    1. Do you know (or can think of) an algorithm with a better asymptotic runtime?
    2. As introsort proves, sometimes it's worth switching algorithms depending on the input size or input topology... do you think it would be possible here?

    For 2., I am notably thinking that for small size inputs, the bottom-left search should be faster because of its O(1) space requirement / lower constant term.
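
    For reference, a compact sketch of the bottom-left search described above, assuming a fully populated int matrix (missing entries would be modelled as Integer.MAX_VALUE):

        public final class YoungTableau {
            // Bottom-left search: start at A[M-1][0]; each step either moves up
            // (current value too large) or right (too small), so at most M+N moves.
            public static boolean contains(int[][] a, int x) {
                int i = a.length - 1;      // last row
                int j = 0;                 // first column
                while (i >= 0 && j < a[0].length) {
                    if (a[i][j] == x) return true;
                    if (a[i][j] > x) i--;  // rest of row i (columns >= j) is >= a[i][j] > x
                    else j++;              // rest of column j (rows <= i) is <= a[i][j] < x
                }
                return false;
            }

            public static void main(String[] args) {
                int[][] t = {
                    {1, 2, 8},
                    {3, 5, 9},
                    {6, 7, 10}
                };
                System.out.println(contains(t, 7));   // true
                System.out.println(contains(t, 4));   // false
            }
        }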

    Read the article

  • Convert scientific notation to decimal notation

    - by Ankur
    There is a similar question on SO which suggests using NumberFormat, which is what I have done. I am using the parse() method of NumberFormat.

        public static void main(String[] args) throws ParseException {
            DecToTime dtt = new DecToTime();
            dtt.decToTime("1.930000000000E+02");
        }

        public void decToTime(String angle) throws ParseException {
            DecimalFormat dform = new DecimalFormat();
            //ParsePosition pp = new ParsePosition(13);
            Number angleAsNumber = dform.parse(angle);
            System.out.println(angleAsNumber);
        }

    The result I get is 1.93. I didn't really expect this to work, because 1.930000000000E+02 is a pretty unusual-looking number. Do I have to do some string parsing first to remove the zeros? Or is there a quick and elegant way?
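
    For what it's worth, the default DecimalFormat pattern has no exponent part, which would explain why parsing stops at the 'E' and yields 1.93. A minimal sketch of two standard-JDK alternatives that do understand scientific notation, with no string surgery needed:

        import java.math.BigDecimal;

        public class SciNotation {
            public static void main(String[] args) {
                String angle = "1.930000000000E+02";

                // Double.parseDouble understands the E+02 exponent directly.
                double asDouble = Double.parseDouble(angle);
                System.out.println(asDouble);                    // 193.0

                // BigDecimal keeps the full precision and can print without an exponent.
                BigDecimal asDecimal = new BigDecimal(angle);
                System.out.println(asDecimal.toPlainString());   // 193.0000000000
            }
        }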

    Read the article

  • ASP.NET InsertCommand to return latest insert ID

    - by Stijn Van Loo
    Dear all, I'm unable to retrieve the latest inserted ID from my SQL Server 2000 db using a typed dataset in ASP.NET. I have created a TableAdapter and ticked "Refresh datatable" and "Generate Insert, Update and Delete statements". This auto-generates the Fill and GetData methods, and the Insert, Update, Select and Delete statements. I have tried every possible solution in this thread http://forums.asp.net/t/990365.aspx but I'm still unsuccessful; it always returns 1 (= the number of affected rows). I do not want to create a separate insert method, as the auto-generated InsertCommand perfectly suits my needs. As suggested in the thread above, I have tried to update the InsertCommand SQL syntax to add SELECT SCOPE_IDENTITY() or something similar, and I have tried to add a parameter of type ReturnValue, but all I get is the number of affected rows. Does anyone have a different take on this? Thanks in advance! Stijn

    Read the article

  • No_data_found exception is propagating to outer block also?

    - by Vineet
    In my code I am entering a salary which is not available in the employees table, and then in the exception block (where I am handling the no_data_found exception) I am inserting a duplicate employee_id into the primary key column of the employees table. What I do not understand is why the 'no data found' exception shows up at the end as well.

    Output:

        Enter some other sal
        ORA-01400: cannot insert NULL into ("SCOTT"."EMPLOYEES"."LAST_NAME")
        ORA-01403: no data found   -- This should not come according to logic

    This is the code:

        DECLARE
            v_sal number := &p_sal;
            v_num number;
        BEGIN
            BEGIN
                select salary INTO v_num from employees where salary = v_sal;
            EXCEPTION
                WHEN no_data_found THEN
                    DBMS_OUTPUT.PUT_LINE('Enter some other sal');
                    INSERT INTO employees (employee_id) values (100);
            END;
        EXCEPTION
            WHEN OTHERS THEN
                DBMS_OUTPUT.PUT_LINE(sqlerrm);
        END;

    Read the article

  • How to send a dynamic set of parameters to a WebMethod using the jQuery ajax method?

    - by kranthi
    Hi, I am using the jQuery ajax method on my aspx page, which invokes the WebMethod in the code-behind. Currently the WebMethod takes a couple of parameters like firstname, lastname, address etc., which I am passing from the jQuery ajax method using data: JSON.stringify({fname:firstname, lname:lastname, city:city}). Now my requirement has changed such that the number and type of parameters that are going to be passed is not fixed; for example, the parameter combination can be something like fname,city or city,lname or fname,lname,city or something else. So the WebMethod should accept any number of parameters. I thought of using arrays to do so, as described here. But I do not understand how I can identify which and how many parameters have been passed to the WebMethod in order to insert/update the data in the DB. Please could someone help me with this? Thanks

    Read the article

  • Load images dynamically on scroll in BlackBerry

    - by Maneesh
    How can I display images in the form of pages, where only one page is displayed at a time on the BlackBerry screen? On scrolling down, subsequent images will load at run time, so that loading of images does not consume time at startup.
    Edit: I am using a loadimage function which loads images from the BlackBerry device memory from a specified path and resizes them. As the number of images increases, the startup time when opening the window increases. There is a built-in application (Media) on the BlackBerry phone where images load without taking any extra time. My idea is to display the particular number of images which fit the BlackBerry screen. As the user scrolls down to the bottom of the screen, the application will then load and display more images. So my question is how to detect when the user has reached the bottom of the BlackBerry screen, and then display one more row of images.

    Read the article

  • Adding unique objects to Core Data

    - by absolut
    I'm working on an iPhone app that gets a number of objects from a database. I'd like to store these using Core Data, but I'm having problems with my relationships. A Detail contains any number of POIs (points of interest). When I fetch a set of POI's from the server, they contain a detail ID. In order to associate the POI with the Detail (by ID), my process is as follows: Query the ManagedObjectContext for the detailID. If that detail exists, add the poi to it. If it doesn't, create the detail (it has other properties that will be populated lazily). The problem with this is performance. Performing constant queries to Core Data is slow, to the point where adding a list of 150 POI's takes a minute thanks to the multiple relationships involved. In my old model, before Core Data (various NSDictionary cache objects), this process was super fast (look up a key in a dictionary, then create it if it doesn't exist). I have more relationships than just this one, but pretty much every one has to do this check (some are many to many, and they have a real problem). Does anyone have any suggestions for how I can help this? I could perform fewer queries (by searching for a number of different ID's), but I'm not sure how much this will help. Some code:

        POI *poi = [NSEntityDescription insertNewObjectForEntityForName:@"POI"
                    inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]];
        poi.POIid = [attributeDict objectForKey:kAttributeID];
        poi.detailId = [attributeDict objectForKey:kAttributeDetailID];

        Detail *detail = [self findDetailForID:poi.POIid];
        if(detail == nil)
        {
            detail = [NSEntityDescription insertNewObjectForEntityForName:@"Detail"
                      inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]];
            detail.title = poi.POIid;
            detail.subtitle = @"";
            detail.detailType = [attributeDict objectForKey:kAttributeType];
        }

        -(Detail*)findDetailForID:(NSString*)detailID
        {
            NSManagedObjectContext *moc = [[UIApplication sharedApplication].delegate managedObjectContext];
            NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Detail" inManagedObjectContext:moc];
            NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
            [request setEntity:entityDescription];
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"detailid == %@", detailID];
            [request setPredicate:predicate];
            NSLog(@"%@", [predicate description]);

            NSError *error;
            NSArray *array = [moc executeFetchRequest:request error:&error];
            if (array == nil || [array count] != 1)
            {
                // Deal with error...
                return nil;
            }
            return [array objectAtIndex:0];
        }

    Read the article

  • WPF application: typing in custom TextBox makes CPU jump from 3 to 80 percent

    - by azamsharp
    I have created a RichTextBox called SharpTextBox which indicates and limits the number of characters that can be typed in it. The implementation is shown at the following link: http://www.highoncoding.com/Articles/673_Creating_SharpRichTextBox_for_Live_Character_Count_in_WPF.aspx Anyway, when I start typing in the TextBox, CPU usage goes from 3% to 78%. The TextBox updates a Label control which shows the number of characters remaining for the count. How can I increase the performance of the TextBox? UPDATE: I read there seems to be some problem with the TextRange.Text property which kills performance.

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database / datastore, but the .Net app is a single instance. The documents are pretty much read only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types, e.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.
    So far I've used the old method which SharePoint used (uses?), and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field that it will map to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document type searches (i.e. the user selects a document type, then they're presented with a list of search fields).
    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. However, I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.
    So, my questions, I guess:
    1. Is a NoSQL DB a good fit?
    2. Is MongoDB the best for ASP.NET (I saw Raven and Velocity, but they're still kinda beta)?
    3. Can I store a key for each document, and then store the action history in a MSSQL DB with this key? I don't need to do joins; it would be for when a person clicks "View History" against a document.
    4. How would performance compare between the two (NoSQL DB vs denormalized "document" table)?
    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.

    Read the article

  • Best way to change Satchmo checkout page fields?

    - by konrad
    For a Satchmo project we have to change the fields a customer has to fill out during checkout. Specifically, we have to:
    - Add a 'middle name' field
    - Replace the bill and delivery addressee with separate first, middle and last name fields
    - Replace the two address lines with street, number and number extension
    These fields are expected by an upstream web service, so we need to store this data separately. What's the best way to achieve this with minimal changes in the rest of Satchmo? We prefer a solution in which we do not have to change the Satchmo code itself, but if required we can fork it.

    Read the article

  • Problem with numbers

    - by StolePopov
    I am given a number N, and I must add some numbers from the array V so that the two sides become equal. V consists of numbers that are all powers of 3:

        N = 17
        S = 0
        V = 1 3 9 27 81 ..

    I should add numbers from V to N and S in order to make them equal. The solution to the example above is: 17 + 1 + 9 = 27, where 27, 1 and 9 are taken from V; a number from V can be taken only once, and when taken it is removed from V. I tried sorting V and then adding the biggest numbers from V to S until S has reached N, but it fails on some tests like this one:

        N = 7
        S = 0
        V = 1 3 9 27

    Here the solution will be: 7 + 3 = 9 + 1. In examples like this I need to add numbers both to N and S, and also select them so the totals become equal. Any idea how to solve this? Thanks.
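
    Both examples above match what a balanced-ternary representation of N gives: every power of 3 gets a digit -1, 0 or +1; the +1 powers go to the S side and the -1 powers are added to N. A sketch of that idea in Java, offered as an assumption about the intended problem rather than a verified solution to the original task:

        import java.util.ArrayList;
        import java.util.List;

        public class PowerOfThreeSplit {
            // Writes n in balanced ternary. Powers with digit +1 are added to S,
            // powers with digit -1 are added to N, so N + sum(addToN) == S + sum(addToS).
            static void split(long n) {
                List<Long> addToN = new ArrayList<>();
                List<Long> addToS = new ArrayList<>();
                long power = 1;
                while (n != 0) {
                    long digit = n % 3;
                    if (digit == 1) {
                        addToS.add(power);   // balanced-ternary digit +1
                        n -= 1;
                    } else if (digit == 2) {
                        addToN.add(power);   // balanced-ternary digit -1
                        n += 1;
                    }
                    n /= 3;
                    power *= 3;
                }
                System.out.println("add to N: " + addToN + ", add to S: " + addToS);
            }

            public static void main(String[] args) {
                split(17);  // add to N: [1, 9], add to S: [27]  -> 17+1+9 == 27
                split(7);   // add to N: [3], add to S: [1, 9]   -> 7+3 == 1+9
            }
        }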

    Read the article

  • Maximum execution time for JavaScript

    - by Andrew Chang
    I know both IE and Firefox have limits for JavaScript execution:
    - http://support.microsoft.com/?kbid=175500 (IE: based on the number of statements executed; I heard it was 5 million somewhere)
    - http://webcache.googleusercontent.com/search?q=cache:_iKHhdfpN-MJ:kb.mozillazine.org/Dom.max_script_run_time+dom.max_script_run_time&hl=en&gl=ca&strip=1 (Firefox: based on the number of seconds; it's 10 seconds by default for my version; Google cache link, since the site takes forever to load for me)
    The thing I don't get is which cases will go over these limits. I'm sure a giant loop will go over the limit for execution time. But will an event handler go over the limit if its own execution time is under the limit but it occurs multiple times?
    Example: let's say I have a timer on my page that executes some JavaScript every 20 seconds. The execution time for the timer handler is 1 second. Do Firefox and IE treat each call of the timer function separately, so it never goes over the limit? Or do Firefox/IE add up the time of each call, so that after 200 seconds on my site (with the timer called 10 times) an error occurs even though the timer handler itself is only 1 second long?

    Read the article

  • Do you still limit line length in code?

    - by Noldorin
    This is a matter on which I would like to gauge the opinion of the community: do you still limit the length of lines of code to a fixed maximum?
    This was certainly a convention of the past for many languages; one would typically cap the number of characters per line to a value such as 80 (and more recently 100 or 120, I believe). As far as I understand, the primary reasons for limiting line length are:
    - Readability: you don't have to scroll horizontally when you want to see the end of some lines.
    - Printing: admittedly (at least in my experience), most code that you are working on does not get printed out on paper, but by limiting the number of characters you can ensure that formatting doesn't get messed up when printed.
    - Past editors (?): not sure about this one, but I suspect that at some point in the distant past of programming, (at least some) text editors may have been based on a fixed-width buffer.
    I'm sure there are points that I am still missing out on, so feel free to add to these... Now, when I observe C or C# code nowadays, I often see a number of different styles, the main ones being:
    - Line length capped to 80, 100, or even 120 characters. As far as I understand, 80 is the traditional length, but the longer ones of 100 and 120 have appeared because of the widespread use of high resolutions and widescreen monitors nowadays.
    - No line length capping at all. This tends to be pretty horrible to read, and I don't see it too often, though it's certainly not too rare either.
    - Inconsistent capping of line length. The length of some lines is limited to a fixed maximum (or even a maximum that changes depending on the file/location in code), while others (possibly comments) are not limited at all.
    My personal preference here (at least recently) has been to cap the line length to 100 in the Visual Studio editor. This means that in a decently sized window (on a non-widescreen monitor), the ends of lines are still fully visible. I can however see a few disadvantages in this, especially when you end up writing code that's indented 3 or 4 levels and then have to include a long string literal, though I often take this as a sign to refactor my code! In particular, I am curious what the C and C# coders (or anyone who uses Visual Studio, for that matter) think about this point, though I would be interested in hearing anyone's thoughts on the subject.
    Edit: Thanks for all the answers; I appreciate the variety of opinions here, all presenting sound reasons. Consensus does seem to be tipping in the direction of always (or almost always) limiting the line length. Interestingly, various coding standards do seem to limit the line length. Judging by some of the answers, both the Python and Google C++ guidelines set the limit at 80 chars. I haven't seen anything similar regarding C# or VB.NET, but I would be curious to see if there are any.

    Read the article

  • Emma - Block Coverage vs Line Coverage

    - by MasterGaurav
    I have a strange scenario: while doing an EMMA coverage run for unit tests, the total block count is larger than the total line count. For block coverage the total is some 50,000, while line coverage is out of 18,000; I get (block-coverage-value) / 50,000 and (line-coverage-value) / 18,000 in the report. Is that possible? How can the number of blocks be more than the number of lines in the code? By the way, you can assume that I know what block coverage is: http://emma.sourceforge.net/faq.html#q.blockcoverage
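
    It can happen: EMMA counts basic blocks in the compiled bytecode, and a single source line that branches compiles into several blocks. A tiny hypothetical illustration in Java of how one line can carry multiple blocks:

        public class BlocksVsLines {
            // One source line, but the short-circuit && and the ?: operator compile
            // to several basic blocks, each tracked separately by block coverage.
            static String classify(int a, int b) {
                return (a > 0 && b > 0) ? "both positive" : "not both positive";
            }

            public static void main(String[] args) {
                System.out.println(classify(1, 2));   // exercises one path through the blocks
                System.out.println(classify(1, -2));  // a different path; the line count stays the same
            }
        }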

    Read the article

  • Disabling scoring in Lucene(.NET)

    - by user72185
    Hi, When searching, is there a way to disable scoring for any query? The scenario is that the user refines his query by trying different combinations of words, phrases etc., and needs realtime (well, reasonably fast at least) responses on the number of hits. Search time slows down a lot when there are millions of hits due to scoring, but the user really doesn't care about all these documents. As soon as he sees there are 1M+ hits he will start adding additional words to the query. A "Sort by relevance" option would allow him to do this quickly, while turning scoring back on when the number of hits is reasonable. Is this possible? I'm using Lucene.NET 2.9.2 but AFAIK it is identical to the Java version.
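
    In the Java API (which Lucene.NET mirrors closely), one route that is often suggested is to wrap the query in a filter-backed ConstantScoreQuery, so every hit gets the same score and no relevance ranking is computed. A sketch, assuming an already-open IndexSearcher and a parsed userQuery; whether it is actually faster at this scale is something to measure:

        import org.apache.lucene.search.ConstantScoreQuery;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.QueryWrapperFilter;
        import org.apache.lucene.search.TopDocs;

        public class HitCount {
            // Wraps the user's query so that matching goes through a filter and
            // every hit gets a constant score; only the match count is of interest.
            static int countHits(IndexSearcher searcher, Query userQuery) throws Exception {
                Query unscored = new ConstantScoreQuery(new QueryWrapperFilter(userQuery));
                TopDocs top = searcher.search(unscored, 1);   // we only need totalHits
                return top.totalHits;
            }
        }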

    Read the article

  • Is x a reserved keyword in JavaScript in FF/Safari but not in IE?

    - by Marco Demaio
    A web page of a web application was showing a strange error. I progressively removed all the HTML/CSS/JS code and arrived at the basic and simple code below.

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html><head>
        <title>test</title>
        <script type="text/javascript">
        var TestObj = {
            foo: function() {}
        }
        alert(x); //ok displays "undefined"
        var x = TestObj.foo;
        var z = TestObj.foo;
        </script>
        </head><body>
        <p onclick='alert(x);'>Click shows function foo!</p>
        <img onclick='alert(x);' alt='CRAZY click displays a number in FF/Safari not function foo' src=''
             style='display: block; width: 100px; height: 100px; border: 1px solid #00ff00;'>
        <p onclick='alert(x);'>Click shows function foo!</p>
        </body></html>

    It's crazy: when clicking on the P elements, the string "function(){}" is displayed as expected. But when clicking on the IMG element it shows a number, as if the x function got in some way removed from memory or deinstantiated (it does not even show x as "undefined", but as a number). To let you test it quickly I placed the working test above also here. This can be reproduced on both Firefox 3.6 and Safari 4.0.4. Everything works properly only on IE7+. I'm really clueless; I was wondering if x is maybe a reserved keyword in JS in Firefox/Safari. Thanks to anyone who could help!
    FYI:
    - if you replace x() with z() everything works perfectly in all browsers (this is even more crazy to me)
    - adding a real image in the src attribute does not fix the problem
    - removing the style on the img does not fix the problem (I gave the image a style only to help you click on it, so you can see the image border)

    Read the article

  • How to determine Windows.Diagnostics.Process from ServiceController

    - by Alex
    This is my first post, so let me start by saying HELLO! I am writing a windows service to monitor the running state of a number of other windows services on the same server. I'd like to extend the application to also print some of the memory statistics of the services, but I'm having trouble working out how to map from a particular ServiceController object to its associated Diagnostics.Process object, which I think I need to determine the memory state. I found out how to map from a ServiceController to the original image name, but a number of the services I am monitoring are started from the same image, so this won't be enough to determine the Process. Does anyone know how to get a Process object from a given ServiceController? Perhaps by determining the PID of a service? Or else does anyone have another workaround for this problem? Many thanks, Alex

    Read the article

  • Code Golf Christmas Edition: How to print out a Christmas tree of height N

    - by TheSoftwareJedi
    Given a number N, how can I print out a Christmas tree of height N using the least number of code characters? N is assumed constrained to a min val of 3, and a max val of 30 (bounds and error checking are not necessary). N is given as the one and only command line argument to your program or script. All languages appreciated, if you see a language already implemented and you can make it shorter, edit if possible - comment otherwise and hope someone cleans up the mess. Include newlines and whitespace for clarity, but don't include them in the character count. A Christmas tree is generated as such, with its "trunk" consisting of only a centered "*"

    N = 3:

          *
         ***
        *****
          *

    N = 4:

           *
          ***
         *****
        *******
           *

    N = 5:

            *
           ***
          *****
         *******
        *********
            *

    N defines the height of the branches not including the one line trunk. Merry Christmas SO!
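
    Not a golf entry (Java is hardly the shortest), just a hedged reference sketch of the output format described above:

        public class Tree {
            public static void main(String[] args) {
                int n = Integer.parseInt(args[0]);          // N = height of the branches
                StringBuilder out = new StringBuilder();
                for (int row = 0; row < n; row++) {         // branches: 1, 3, 5, ... stars
                    appendLine(out, n, 2 * row + 1);
                }
                appendLine(out, n, 1);                      // the one-line trunk, centered
                System.out.print(out);
            }

            // Appends one line of the given number of stars, centered on column n
            // (left padding only, no trailing spaces).
            private static void appendLine(StringBuilder out, int n, int stars) {
                for (int i = 0; i < n - (stars + 1) / 2; i++) out.append(' ');
                for (int i = 0; i < stars; i++) out.append('*');
                out.append('\n');
            }
        }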

    Read the article

  • Flexible argument list in LibreOffice Calc Macro

    - by Patru
    I want to write a function that geometrically links performance data, which is usually provided as percentages, so the function will basically return (1+a)*(1+b)*(1+c)* … *(1+x)-1. This should be done using LibreOffice Calc, and it should behave similarly to the regular SUM function. As you may throw any number of arguments at SUM, I would like to be able to do the same with my alternative geoSum function, but I am unable to find suitable documentation on handling a variable number of arguments with variable types (i.e. an arbitrary mix of numbers, cells and ranges). How would I have to specify the arguments to my LibreOffice Basic function, and how would I have to interpret them?

    Read the article

  • Webservice request issue with dynamic request inputs

    - by nanda
        try
        {
            const string siteURL = "http://ops.epo.org/2.6.1/soap-services/document-retrieval";
            const string docRequest = "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:xsd='http://www.w3.org/2001/XMLSchema'><soap:Body><document-retrieval id='EP 1000000A1 I ' page-number='1' document-format='SINGLE_PAGE_PDF' system='ops.epo.org' xmlns='http://ops.epo.org' /></soap:Body></soap:Envelope>";

            var request = (HttpWebRequest)WebRequest.Create(siteURL);
            request.Method = "POST";
            request.Headers.Add("SOAPAction", "\"document-retrieval\"");
            request.ContentType = " text/xml; charset=utf-8";

            Stream stm = request.GetRequestStream();
            byte[] binaryRequest = Encoding.UTF8.GetBytes(docRequest);
            stm.Write(binaryRequest, 0, docRequest.Length);
            stm.Flush();
            stm.Close();

            var memoryStream = new MemoryStream();
            WebResponse resp = request.GetResponse();
            var buffer = new byte[4096];
            Stream responseStream = resp.GetResponseStream();
            {
                int count;
                do
                {
                    count = responseStream.Read(buffer, 0, buffer.Length);
                    memoryStream.Write(buffer, 0, count);
                } while (count != 0);
            }
            resp.Close();

            byte[] memoryBuffer = memoryStream.ToArray();
            System.IO.File.WriteAllBytes(@"E:\sample12.pdf", memoryBuffer);
        }
        catch (Exception ex)
        {
            throw ex;
        }

    The code above retrieves the PDF web response. It works fine as long as the request remains constant:

        const string docRequest = "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:xsd='http://www.w3.org/2001/XMLSchema'><soap:Body><document-retrieval id='EP 1000000A1 I ' page-number='1' document-format='SINGLE_PAGE_PDF' system='ops.epo.org' xmlns='http://ops.epo.org' /></soap:Body></soap:Envelope>";

    but how do I retrieve the same with dynamic requests? When the above code is changed to accept dynamic inputs like

        [WebMethod]
        public string DocumentRetrivalPDF(string docid, string pageno, string docFormat, string fileName)
        {
            try
            {
                ........
                .......
                string docRequest = "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:xsd='http://www.w3.org/2001/XMLSchema'><soap:Body><document-retrieval id=" + docid + " page-number=" + pageno + " document-format=" + docFormat + " system='ops.epo.org' xmlns='http://ops.epo.org' /></soap:Body></soap:Envelope>";
                ......
                ........
                return "responseTxt";
            }
            catch (Exception ex)
            {
                return ex.Message;
            }
        }

    it returns an "INTERNAL SERVER ERROR: 500". Can anybody help me with this?

    Read the article
