Search Results

Search found 10417 results on 417 pages for 'large'.


  • Get the src part of a string [duplicate]

    - by Kay Lakeman
    This question already has an answer here: "Grabbing the href attribute of an A element" (7 answers). First post ever here, and I really hope you can help. I use a database where a large piece of HTML is stored, and now I just need the src part of the image tag. I already found a thread, but it just doesn't do the trick. My code: Original string: <p><img alt=\"\" src=\"http://domain.nl/cms/ckeditor/filemanager/userfiles/background.png\" style=\"width: 80px; height: 160px;\" /></p> How I start: $image = strip_tags($row['information'], '<img>'); echo stripslashes($image); This returns: <img alt="" src="http://domain.nl/cms/ckeditor/filemanager/userfiles/background.png" style="width: 80px; height: 160px;" /> Next step: extract the src part: preg_match('/< *img[^>]*src *= *["\']?([^"\']*)/i', $image, $matches); echo $matches; This last echo returns: Array. What is going wrong? Thanks in advance for your answer.

    Read the article

  • Wasteful Ajax Page Loading

    - by Matt Dawdy
    I've started a new job, and the portion of the project I'm working on has a very odd structure. Every page is a .NET aspx page, and it loads just fine, but nothing is really done at load time. Everything is really loaded from a jQuery document.onready handler. What is even more...interesting...is that the onready handler makes some ajax calls that drop entire .aspx pages into divs on the page, but first it strips out several parts of the returned page. This is the "magic" script the previous programmer ran on all the returned HTML from his ajax calls: function CleanupResponseText(responseText, uniqueName) { responseText = responseText.replace("theForm.submit();", "SubmitSubForm(theForm, $(theForm).parent());"); responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName); responseText = responseText.replace(new RegExp("doPostBack", "g"), "doPostBack" + uniqueName); return responseText; } He then intercepts any kind of form postback and runs his own form submission function: function SubmitSubForm(form, container) { //ShowLoading(container); $(form).ajaxSubmit( { url: $(form).attr("action"), success: function(responseText) { $(container).html(CleanupResponseText(responseText, form.id)); $("form", container).css("margin-top", "0").css("padding-top", "0"); //HideLoading(container); } } ); } Am I way off base in thinking that this is less than optimal? I mean, how does a browser take out the html and head and other tags that don't have anything to do with what you are really trying to drop into that div? Also, he's returning things like asp:gridview controls, and the associated viewstate, which can be quite large if his dataset is big. Has anyone seen this before?

    Read the article

  • Best Functional Approach

    - by dbyrne
    I have some mutable scala code that I am trying to rewrite in a more functional style. It is a fairly intricate piece of code, so I am trying to refactor it in pieces. My first thought was this: def iterate(count:Int,d:MyComplexType) = { //Generate next value n //Process n causing some side effects return iterate(count - 1, n) } This didn't seem functional at all to me, since I still have side effects mixed throughout my code. My second thought was this: def generateStream(d:MyComplexType):Stream[MyComplexType] = { //Generate next value n return Stream.cons(n, generateStream(n)) } for (n <- generateStream(initialValue).take(2000000)) { //process n causing some side effects } This seemed like a better solution to me, because at least I've isolated my functional value-generation code from the mutable value-processing code. However, this is much less memory efficient because I am generating a large list that I don't really need to store. This leaves me with 3 choices: Write a tail-recursive function, bite the bullet and refactor the value-processing code Use a lazy list. This is not a memory sensitive app (although it is performance sensitive) Come up with a new approach. I guess what I really want is a lazily evaluated sequence where I can discard the values after I've processed them. Any suggestions?
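    A sketch of what that lazily evaluated, discard-as-you-go sequence could look like (the nextValue and process names are placeholders, not from the post): Scala's Iterator.iterate is lazy and, unlike Stream, does not retain elements once they have been consumed, so memory use stays constant while the processing code remains separate from the generation code.

        // Placeholders standing in for the poster's generation and side-effecting code.
        def nextValue(d: MyComplexType): MyComplexType = ???   // generate the next value
        def process(d: MyComplexType): Unit = ???              // side effects live here

        // Iterator is lazy and discards consumed elements, unlike Stream, which memoizes them.
        Iterator.iterate(initialValue)(nextValue)
          .take(2000000)
          .foreach(process)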

    Read the article

  • Code Golf: Counting XML elements in file on Android

    - by CSharperWithJava
    Take a simple XML file formatted like this: <Lists> <List> <Note/> ... <Note/> </List> <List> <Note/> ... <Note/> </List> </Lists> Each node has some attributes that actually hold the data of the file. I need a very quick way to count the number of each type of element, (List and Note). Lists is simply the root and doesn't matter. I can do this with a simple string search or something similar, but I need to make this as fast as possible. Design Parameters: Must be in java (Android application). Must AVOID allocating memory as much as possible. Must return the total number of Note elements and the number of List elements in the file, regardless of location in file. Number of Lists will typically be small (1-4), and number of notes can potentially be very large (upwards of 1000, typically 100) per file. I look forward to your suggestions.
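    One streaming approach (a sketch, not from the post) is Android's XmlPullParser, which walks the document in a single pass without building a tree; the method name and the int[] return convention here are just for illustration.

        import java.io.IOException;
        import java.io.Reader;
        import org.xmlpull.v1.XmlPullParser;
        import org.xmlpull.v1.XmlPullParserException;
        import org.xmlpull.v1.XmlPullParserFactory;

        // Returns {listCount, noteCount} after one pass over the document.
        static int[] countListsAndNotes(Reader in) throws XmlPullParserException, IOException {
            XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
            parser.setInput(in);
            int lists = 0, notes = 0;
            for (int event = parser.getEventType(); event != XmlPullParser.END_DOCUMENT; event = parser.next()) {
                if (event == XmlPullParser.START_TAG) {
                    String name = parser.getName();
                    if ("List".equals(name)) lists++;
                    else if ("Note".equals(name)) notes++;
                }
            }
            return new int[] { lists, notes };
        }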

    Read the article

  • Publish Git repository to SVN

    - by Ken Williams
    My small team and I work in Git, and the larger group uses Subversion. I'd like to schedule a cron job to publish our repositories' current HEADs every hour into a certain directory in the SVN repo. I thought I had this figured out, but the recipe I wrote down previously doesn't seem to be working now: git clone ssh://me@gitserver/git-repo/Projects/ProjX px2 cd px2 svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX git svn fetch git rebase trunk master git svn dcommit Here's what happens when I attempt it: % git clone ssh://me@gitserver/git-repo/Projects/ProjX px2 Cloning into 'ProjX'... ... % cd px2 % svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX Committed revision 123. % git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX Using higher level of URL: http://me@svnserver/svn/repo/play/me/fromgit/ProjX => http://me@svnserver/svn/repo % git svn fetch W: Ignoring error from SVN, path probably does not exist: (160013): Filesystem has no item: File not found: revision 100, path '/play/me/fromgit/ProjX' W: Do not be alarmed at the above message git-svn is just searching aggressively for old history. This may take a while on large repositories % git rebase trunk master fatal: Needed a single revision invalid upstream trunk I could have sworn this worked previously; does anyone have any suggestions? Thanks.

    Read the article

  • C# performance methods of receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes (because I want to ignore memory issues) of random data through. Is there any performance difference, or a best-practice method, that one should use for receiving data? The final output data should be represented by a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I were downloading a large file I wouldn't be doing it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receiving. I just want to know what would be the most efficient method of receiving data. A few dodgy ways I can think of are: Have a List and a buffer; after the buffer is full, add it to the list, and at the end call list.ToArray() to get the byte[]. Write the buffer to a memory stream; after it's complete, construct a byte[] of the stream.Length and read it all into it in order to get the byte[] output. Is there a more efficient/better way of doing this?

    Read the article

  • Rebuilding old (2010) django project in 2012

    - by birgit
    I am trying to make an old Django project run again. After seemingly having solved issues with old sorl.thumbnail versions and deprecated expressions I now get this error when running python manage.py runserver I also tried to copy & paste my old files into a new Django project and get the exactly same error. Maybe someone here has a clue where the problem lies? Unhandled exception in thread started by <bound method Command.inner_run of <django.contrib.staticfiles.management.commands.runserver.Command object at 0x2a80510>> Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/django/core/management/commands/runserver.py", line 88, in inner_run self.validate(display_num_errors=True) File "/usr/lib/python2.7/dist-packages/django/core/management/base.py", line 249, in validate num_errors = get_validation_errors(s, app) File "/usr/lib/python2.7/dist-packages/django/core/management/validation.py", line 35, in get_validation_errors for (app_name, error) in get_app_errors().items(): File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 146, in get_app_errors self._populate() File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 61, in _populate self.load_app(app_name, True) File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 78, in load_app models = import_module('.models', app_name) File "/usr/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module __import__(name) File "/home/me/Documents/wdws/wdws/../wdws/cityofwindows/models.py", line 73, in <module> class Image(models.Model): File "/home/me/Documents/wdws/wdws/../wdws/cityofwindows/models.py", line 83, in Image 'large': {'size': (640, 640)}, File "/usr/lib/python2.7/dist-packages/django/db/models/fields/files.py", line 233, in __init__ super(FileField, self).__init__(verbose_name, name, **kwargs) TypeError: __init__() got an unexpected keyword argument 'extra_thumbnails' I need to re-build the project just for visual documentation locally... so also any hints on how to quickly re-run outdated django-projects are very welcome!! Thanks a lot (using Ubuntu 12.04)

    Read the article

  • Writing out sheet to text file using POI event model

    - by Eduardo Dennis
    I am using the XLSX2CSV example to parse large sheets from a workbook. Since I only need to output the data for specific sheets, I added an if statement in the process method to test for them. When the condition is met, I continue with the process. public void process() throws IOException, OpenXML4JException, ParserConfigurationException, SAXException { ReadOnlySharedStringsTable strings = new ReadOnlySharedStringsTable(this.xlsxPackage); XSSFReader xssfReader = new XSSFReader(this.xlsxPackage); StylesTable styles = xssfReader.getStylesTable(); XSSFReader.SheetIterator iter = (XSSFReader.SheetIterator) xssfReader.getSheetsData(); while (iter.hasNext()) { InputStream stream = iter.next(); String sheetName = iter.getSheetName(); if (sheetName.equals("SHEET1")||sheetName.equals("SHEET2")||sheetName.equals("SHEET3")||sheetName.equals("SHEET4")||sheetName.equals("SHEET5")){ processSheet(styles, strings, stream); try { System.setOut(new PrintStream( new FileOutputStream("C:\\Users\\edennis.AD\\Desktop\\test\\"+sheetName+".txt"))); } catch (Exception e) { e.printStackTrace(); } stream.close(); } } } But I need to output a text file and I'm not sure how to do it. I tried to use System.setOut() to redirect everything from System.out to a text file, but that's not working; I just get blank files.
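    One possible fix (my assumption, not from the post, and assuming the sheet contents are printed via System.out as the stock example does when run from main): redirect System.out to the per-sheet file before calling processSheet, since redirecting afterwards means the sheet has already been written to the old destination. A rough sketch of the inside of the if block:

        PrintStream original = System.out;
        PrintStream sheetOut = new PrintStream(
                new FileOutputStream("C:\\Users\\edennis.AD\\Desktop\\test\\" + sheetName + ".txt"));
        try {
            System.setOut(sheetOut);                 // everything processSheet prints now goes to the file
            processSheet(styles, strings, stream);
        } finally {
            System.setOut(original);                 // restore the console for the next iteration
            sheetOut.close();
            stream.close();
        }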

    Read the article

  • Dynamic use of :default_url in Paperclip

    - by dgilperez
    I'm trying to configure Paperclip to provide different missing images based on the instance's category attribute. Every category of the object has its own missing image. This is my first take (EDIT: added the full models): class Service < ActiveRecord::Base attr_accessible :logo, :logo_file_name, :logo_content_type, :logo_file_size, :logo_updated_at belongs_to :category, :counter_cache => true has_attached_file :logo, :path => "/:id-:style-:filename", :url => ":s3_eu_url", :default_url => "/logos/:style/#{self.category.name}.png", :styles => { :large => "600x400>", :medium => "300x200>", :small => "100x75>", :thumb => "60x42>" } end class Category < ActiveRecord::Base attr_accessible nil has_many :services end In my view, image_tag service.logo.url(:thumb) outputs: undefined method `category' for #<Class:0x0000010a731620> Any ideas? EDIT2: A working default_url is :default_url => "/logos/:style/missing.png", SOLUTION: See my own answer below.

    Read the article

  • How to get Amazon s3 PHP SDK working?

    - by JakeRow123
    I'm trying to set up S3 for the first time, running the sample file that comes with the PHP SDK that creates a bucket and attempts to upload some demo files to it. But this is the error I am getting: The difference between the request time and the current time is too large. I read in another question on SO that this is because Amazon determines a valid request by comparing the times between the server and the client: the two must be within a 15-minute span of one another. Now here is the problem. My laptop's time is 12:30 AM, June 8, 2012 at the moment. On my server I created a file called servertime.php and placed this code in it: <?php print strftime('%c'); ?> and the output is: Fri Jun 8 00:31:22 2012. It looks like the day is correct, but I don't know what to make of 00:31:22. In any case, how is it possible to always make sure the time between the client and server is within a 15-minute window of one another? What if I have a user in China who wishes to upload a file on my site, which uses S3 for the CDN? Then the time difference would be over a day. How can I make sure all my users' times are within 15 minutes of my server time? What if the user is in the U.S. but the time on their machine is misconfigured? Basically, how do I get S3 bucket creation and upload to work?

    Read the article

  • Model class for NSDictionary information with Lazy Loading

    - by samfu_1
    My application utilizes approx. 50+ .plists that are used as NSDictionaries. Several of my view controllers need access to the properties of the dictionaries, so instead of writing duplicate code to retrieve the .plist, convert the values to a dictionary, etc., each time I need the info, I thought a model class to hold the data and supply information would be appropriate. The application isn't very large, but it does handle a good deal of data. I'm not as skilled in writing model classes that conform to the MVC paradigm, and I'm looking for some strategies for this implementation that also support lazy loading. This model class should serve to supply data to any view controller that needs it and perform operations on the data (such as adding entries to dictionaries) when requested by the controller. Functions currently planned: returning the count of any dictionary; adding one or more dictionaries together. Currently, I have this method for supporting the count lookup for any dictionary. Would this be an example of lazy loading? -(NSInteger)countForDictionary: (NSString *)nameOfDictionary { NSBundle *bundle = [NSBundle mainBundle]; NSString *plistPath = [bundle pathForResource: nameOfDictionary ofType: @"plist"]; //load plist into dictionary NSMutableDictionary *dictionary = [[NSMutableDictionary alloc] initWithContentsOfFile: plistPath]; NSInteger count = [dictionary count]; [dictionary release]; return count; }

    Read the article

  • What goes into main function?

    - by Woltan
    I am looking for a best-practice tip on what goes into the main function of a C++ program. Currently I think two approaches are possible (although the "margins" of those approaches can be arbitrarily close to each other). 1: Write a "Master" class that receives the parameters passed to the main function and handles the complete program in that "Master" class (of course you also make use of other classes). The main function would therefore be reduced to a minimum of lines. #include "MasterClass.h" int main(int args, char* argv[]) { MasterClass MC(args, argv); } 2: Write the "complete" program in the main function, making use of user-defined objects of course! However, there are also global functions involved and the main function can get somewhat large. I am looking for some general guidelines on how to write the main function of a program in C++. I came across this issue by trying to write some unit tests for the first approach, which is a little difficult since most of the methods are private. Thx in advance for any help, suggestion, link, ...
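    A minimal sketch of the first approach (the class and method names are made up for illustration, not from the post): main only translates argv and forwards an exit code, so the logic behind run() can be unit-tested without going through main at all.

        #include <string>
        #include <utility>
        #include <vector>

        class Application {
        public:
            explicit Application(std::vector<std::string> args) : args_(std::move(args)) {}
            int run() { /* the whole program lives behind this call */ return 0; }
        private:
            std::vector<std::string> args_;
        };

        int main(int argc, char* argv[]) {
            // main stays a thin shell: build the argument list, delegate, return the exit code.
            return Application(std::vector<std::string>(argv, argv + argc)).run();
        }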

    Read the article

  • #include - brackets vs quotes in XCode?

    - by Chris Becke
    In MSVC++, #include files are searched for differently depending on whether the file name is enclosed in "" or <>. The quoted form searches first in the local folder, then in /I-specified locations; the angle-bracket form skips the local folder. This means that in MSVC++ it's possible to have header files with the same name as runtime and SDK headers. So, for example, I need to wrap the Windows SDK windows.h file to undefine some macros that cause trouble. With MSVS I can just add an (optional) windows.h file to my project as long as I include it using the quoted form: // some .cpp file #include "windows.h" // will include my local windows.h file And in my windows.h, I can pull in the real one using the angle-bracket form: // my windows.h #include <windows.h> // will load the real one #undef ConflictingSymbol Trying this trick with GCC in XCode didn't work. Angle-bracket #includes in system header files are in fact finding my header files with similar names in my local folder structure. The MSVC system means it's quite safe to have a "String.h" header file in my own folder structure. On XCode this seems to be a major no-no. Is there some way to control this search-path behaviour in XCode to be more like MSVC's? Or do I just have to avoid naming any of my headers anything that might possibly conflict with a system header? Writing cross-platform code and using lots of frameworks means the possibility of incidental conflicts seems large.

    Read the article

  • Redirect requests only if the file is not found?

    - by ZenBlender
    I'm hoping there is a way to do this with mod_rewrite and Apache, but maybe there is another way to consider too. On my site, I have directories set up for re-skinned versions of the site for clients. If the web root is /home/blah/www, a client directory would be /home/blah/www/clients/abc. When you access the client directory via a web browser, I want it to use any requested files in the client directory if they exist. Otherwise, I want it to use the file in the web root. For example, let's say the client does not need their own index.html. Therefore, some code would determine that there is no index.html in /home/blah/www/clients/abc and will instead use the one in /home/blah/www. Keep in mind that I don't want to redirect the client to the web root at any time, I just want to use the web root's file with that name if the client directory has not specified its own copy. The web browser should still point to /clients/abc whether the file exists there or in the root. Likewise, if there is a request for news.html in the client directory and it DOES exist there, then just serve that file instead of the web root's news.html. The user's experience should be seamless. I need this to work for requests on any filename. If I need to, for example, add a new line to .htaccess for every file I might want to redirect, it rather defeats the purpose as there is too much maintenance needed, and a good chance for errors given the large number of files. In your examples, please indicate whether your code goes in the .htaccess file in the client directory, or the web root. Web root is preferred. Thanks for any suggestions! :)

    Read the article

  • Combining Java hashcodes into a "master" hashcode

    - by Nick Wiggill
    I have a vector class with hashCode() implemented. It wasn't written by me, but uses 2 prime numbers by which to multiply the 2 vector components before XORing them. Here it is: /*class Vector2f*/ ... public int hashCode() { return 997 * ((int)x) ^ 991 * ((int)y); //large primes! } ...As this is from an established Java library, I know that it works just fine. Then I have a Boundary class, which holds 2 vectors, "start" and "end" (representing the endpoints of a line). The values of these 2 vectors are what characterize the boundary. /*class Boundary*/ ... public int hashCode() { return 1013 * (start.hashCode()) ^ 1009 * (end.hashCode()); } Here I have attempted to create a good hashCode() for the unique 2-tuple of vectors (start & end) constituting this boundary. My question: Is this hashCode() implementation going to work? (Note that I have used 2 different prime numbers in the latter hashCode() implementation; I don't know if this is necessary but better to be safe than sorry when trying to avoid common factors, I guess -- since I presume this is why primes are popular for hashing functions.)
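    For comparison, a conventional combining pattern (a sketch, not claiming it's better than the XOR-of-primes version in the post): multiply-and-add keeps the order of the fields significant, so swapping start and end produces a different hash.

        @Override
        public int hashCode() {
            int result = 17;
            result = 31 * result + start.hashCode();  // fold in the first endpoint
            result = 31 * result + end.hashCode();    // order matters: (a, b) hashes differently from (b, a)
            return result;
        }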

    Read the article

  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code, and really want some confirmation on what I think the problem is: We have a very large data task inside a control flow container. This control flow container is set up with TransactionOption = supported - i.e. it will 'inherit' transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table with pseudo code something like: "If a record doesn't exist that matches these parameters then write it" Now, the issue is that there are three records being passed into this proc all with the same parameters, so logically the first record doesn't find a match and a record is created. The second record (with the same parameters) also doesn't find a match and another record is created. My understanding is that the first 'record' passed to the proc in the dataflow is uncommitted and therefore can't be 'read' by the second call. The upshot being that all three records create a row, when logically only the first should. In this scenario am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help because it's not being wrapped in a transaction anyway.... Hope that makes sense, and any advice gratefully received. Work-arounds confer god-like status on you.

    Read the article

  • Does .Net use Device Dependent or Device Independent Bitmaps?

    - by Brian
    When loading an image into memory, does .Net use DDBs, DIBs, or something else entirely? If possible, please cite your sources. I'm wondering because we currently have a classic ASP application using a 3rd-party component to load images that occasionally produces a “Not enough storage is available to process this command.” error. The error is very inconsistent but tends to happen on larger images (not always, but often). After resetting IIS, processing the same file again typically works just fine. After much research I have found that DDBs tend to have this problem when processing large images because they work out of video memory. Considering that we are running on a web server with an integrated video card and limited shared memory, this could certainly be our problem. We are in the early stages of converting our app to .Net and I am wondering if using .Net for this might be a viable alternative to our current method, which is why I am asking the question. Any advice is welcome :) but out of curiosity if nothing else, I am really hoping for an answer to the question: does .Net use DDBs or DIBs?

    Read the article

  • Visual Studio 2008 closes unexpectedly

    - by Jose
    I don't know if I can really get an answer to this question, but it really irks me and I would like to know if someone has an idea of how to arrive at an answer. I have a pretty large solution in VS 2008 where, maybe every week or every other week, the IDE closes without warning whenever I click Properties to get to the project properties. After that happens it will close EVERY time I try to view the properties. At that point I try deleting the .suo file, I resize the IDE, I close the tabs within the project, I restore default VS settings (when I'm desperate). Eventually, 20-30 minutes later, I can actually view the properties. I haven't figured out exactly what fixes it; it seems to be different every time. Once it's "fixed" I can't break it again in order to figure out what "fixed" it. This seems to be project-specific, because I can view the properties of other projects while this project is misbehaving. I guess my first question is: does VS log reasons for closing unexpectedly? Can I find out what the offending reason behind this is? The main frustration is that I don't know the cause, nor the cure. Any ideas?

    Read the article

  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there, Here is the situation we have: a) I have an Access database/application that records a significant amount of data. Significant fields would be hours, # of sales, # of unreturned calls, etc. b) I have an Excel document that connects to the Access database and pulls data in to visualize it. As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable. The main 'visual form' then uses VLOOKUP to get the results from the form, based on the related hours. This operation is slow (~10 seconds) and seems to be redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - just need directions. Thanks in advance! Update: I have confirmed (thanks to helpful comments/responses) that the problem is with the data loading itself. Removing all the VLOOKUPs only took a second or two off the load time. So the question stands: how can I rapidly and reliably get the data without so much time involvement (it loads around 3000 records into the PivotTables)?

    Read the article

  • declarative_authorization permissions on roles

    - by William
    Hey all, I'm trying to add authorization to a rather large app that already exists, but I have to obfuscate the details a bit. Here's the background: In our app we have a number of roles that are hierarchical, roughly like this: BasicUser -> SuperUser -> Admin -> SuperAdmin For authorization each User model instance has an attribute 'role' which corresponds to the above. We have a RESTful controller "Users" that is namespaced under Backoffice. So in short it's Backoffice::UsersController. class Backoffice::UsersController < ApplicationController filter_access_to :all #... RESTful actions + some others end So here's the problem: we want users to be able to edit other users, but ONLY users with a 'smaller' role than their own. I've created the following in authorization_rules.rb authorization do role :basic_user do has_permission_on :backoffice_users, :to => :index end role :super_user do includes :basic_user has_permission_on :backoffice_users, :to => :edit do if_attribute :role => is_in { %w(basic_user) } end end role :admin do includes :super_user end role :super_admin do includes :admin end end And unfortunately that's as far as I got; the rule doesn't seem to get applied. If I comment the rule out, nobody can edit. If I leave the rule in, you can edit everybody. I've also tried a couple of variations on the if_attribute: if_attribute :role => is { 'basic_user' } if_attribute :role => 'basic_user' and they have the same effect. Does anybody have any suggestions?

    Read the article

  • Strategies for Synchronizing Data Between a Rails App and iPhone App

    - by jessecurry
    I've written many iPhone Applications that have pulled data from web services and I've worked on synchronizing data between an iPhone App and a Web Application, but I've always felt that there is probably a better way to handle the synchronization. I'd like to know what strategies you have used to synchronize data between your iPhone(read: mobile) Apps and your Rails(read: web) Applications. Are there any strategies that scale particularly well? How have you dealt with large amounts of data? (Do you use paged responses?) How do you make sure that data is not overwritten? Is there a reason to avoid Ruby on Rails? if so, can you suggest an alternative? What is better about the alternative? What strategies have failed? Why do you believe that those strategies failed? I would like to be able to keep all of the data modifications on the server, but the particular application I am about to start work on will need the ability to operate while disconnected from the network. The user will be able to update data on the mobile device and update data through the web application. When the user's mobile device connects to the server any local changes will be pushed to the server.

    Read the article

  • Working with a list, performing arithmetic logic in Python

    - by haea ohoh
    Suppose I have made a large list of numbers, and I want to make another one which I will add, pairwise, with the first list. Here's the first list, A: [109, 77, 57, 34, 94, 68, 96, 72, 39, 67, 49, 71, 121, 89, 61, 84, 45, 40, 104, 68, 54, 60, 68, 62, 91, 45, 41, 118, 44, 35, 53, 86, 41, 63, 111, 112, 54, 34, 52, 72, 111, 113, 47, 91, 107, 114, 105, 91, 57, 86, 32, 109, 84, 85, 114, 48, 105, 109, 68, 57, 78, 111, 64, 55, 97, 85, 40, 100, 74, 34, 94, 78, 57, 77, 94, 46, 95, 60, 42, 44, 68, 89, 113, 66, 112, 60, 40, 110, 89, 105, 113, 90, 73, 44, 39, 55, 108, 110, 64, 108] And here's B: [35, 106, 55, 61, 81, 109, 82, 85, 71, 55, 59, 38, 112, 92, 59, 37, 46, 55, 89, 63, 73, 119, 70, 76, 100, 49, 117, 77, 37, 62, 65, 115, 93, 34, 107, 102, 91, 58, 82, 119, 75, 117, 34, 112, 121, 58, 79, 69, 68, 72, 110, 43, 111, 51, 102, 39, 52, 62, 75, 118, 62, 46, 74, 77, 82, 81, 36, 87, 80, 56, 47, 41, 92, 102, 101, 66, 109, 108, 97, 49, 72, 74, 93, 114, 55, 116, 66, 93, 56, 56, 93, 99, 96, 115, 93, 111, 57, 105, 35, 99] How might I generate the arithmatic addition logic, processing each pairwise value one by one (A[0] and B[0], through A[99], B[99]) and producing the list C (A[0] + B[0] through A[99]+ B[99])?
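    A minimal sketch using zip, shown with shortened stand-in lists rather than the full 100-element ones above:

        A = [109, 77, 57]                       # stand-ins for the full lists in the post
        B = [35, 106, 55]

        C = [a + b for a, b in zip(A, B)]       # C[i] == A[i] + B[i]
        print(C)                                # [144, 183, 112]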

    Read the article

  • extra new lines with several outputStream.write

    - by Sam
    Hi All, I am writing a JSP to export data in Excel format to the user. An Excel file can be received on the client side. However, since there's a large amount of data, I don't want to keep it all in server memory and write it at the end, so I try to divide it and write it in several pieces. However, each extra write(..) causes extra new lines at the top of the Excel worksheet, and the extra data is placed after these new lines. Does anyone know the reason? The code is something like this: response.setHeader("Content-disposition","attachment;filename=DocuShareSearch.xls"); response.setHeader("Content-Type", "application/octet-stream"); responseContent ="<table><tr><td>12131</td></tr>......."; byte[] responseByte1 = responseContent.getBytes("utf-16"); outputStream.write(responseByte1, 0, responseByte1.length ); responseContent =".....<tr><td>12131</td></tr></table>"; byte[] responseByte2 = responseContent.getBytes("utf-16"); outputStream.write(responseByte2, 0, responseByte2.length ); outputStream.close();
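    One possible explanation (my assumption, not from the post): each call to getBytes("utf-16") prepends its own byte-order mark to the chunk, so every chunk after the first starts with an invisible extra character. Writing all chunks through a single java.io.OutputStreamWriter keeps one encoder alive for the whole response, so the mark is emitted only once; a rough sketch reusing the same outputStream:

        // Wrap the stream once and write every chunk through the same writer.
        java.io.Writer writer = new java.io.OutputStreamWriter(outputStream, "UTF-16");
        writer.write("<table><tr><td>12131</td></tr>.......");  // first chunk
        writer.write(".....<tr><td>12131</td></tr></table>");   // later chunks get no extra byte-order mark
        writer.flush();
        writer.close();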

    Read the article

  • How to implement a log window in a web browser?

    - by Jeremy Friesner
    Hi all, I'm interested in adding an HTML/web-browser based "log window" to my net-enabled device. Specifically, my device has a customized web server and an event log, and I'd like to be able to leave a web browser window open to e.g. http://my.devices.ip.address/system_log and have events show up as text in the web browser window as they happen. People could then use this as a quick way to monitor what the system is doing, without needing to run any special software. My question is, what is the best way to implement this? I've tried the obvious approach -- just have my device's embedded web server hold the HTTP/TCP connection open indefinitely, and write the necessary text to the TCP socket when an event occurs -- but the problem with that is that most web browsers (e.g. Safari) don't display the web page until the server has closed the TCP connection, and so the result is that the log data never appears in the web browser; it just acts as if the page is taking forever to load. Is there some trick to make this work? I could implement it as a Java applet, but I'd much prefer something more lightweight/simple, either using only HTML or possibly HTML+JavaScript. Also I'd like to avoid having the web browser 'poll' the server, since that would either introduce too much latency (if the reload delay was large) or put load on the system (if the delay was small).
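    One lightweight option worth naming (not from the post) is Server-Sent Events: the device answers /system_log with Content-Type: text/event-stream and writes one "data: ..." line per event, and the browser side needs only a few lines of script. A sketch of the browser side, assuming a <pre id="log"> placeholder on the page:

        const logEl = document.getElementById("log")!;     // assumed <pre id="log"> placeholder
        const events = new EventSource("/system_log");     // the device keeps this response open and pushes events
        events.onmessage = (e: MessageEvent) => {
          logEl.textContent = (logEl.textContent ?? "") + e.data + "\n";  // append each pushed log line
        };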

    Read the article

  • Move SELECT to SQL Server side

    - by noober
    Hello all, I have an SQLCLR trigger. It contains a large and messy SELECT inside, with parts like: (CASE WHEN EXISTS(SELECT * FROM INSERTED I WHERE I.ID = R.ID) THEN '1' ELSE '0' END) AS IsUpdated -- Is selected row just added? as well as JOINs etc. I like to have the result as a single table with all included. Question 1. Can I move this SELECT to SQL Server side? If yes, how to do this? Saying "move", I mean to create a stored procedure or something else that can be executed before reading dataset in while cycle. The 2 following questions make sense only if answer is "yes". Why do I want to move SELECT? First off, I don't like mixing SQL with C# code. At second, I suppose that server-side queries run faster, since the server have more chances to cache them. Question 2. Am I right? Is it some sort of optimizing? Also, the SELECT contains constant strings, but they are localizable. For instance, WHERE R.Status = "Enabled" "Enabled" should be changed for French, German etc. So, I want to write 2 static methods -- OnCreate and OnDestroy -- then mark them as stored procedures. When registering/unregistering my assembly on server side, just call them respectively. In OnCreate format the SELECT string, replacing {0}, {1}... with required values from the assembly resources. Then I can localize resources only, not every script. Question 3. Is it good idea? Is there an existing attribute to mark methods to be executed by SQL Server automatically after (un)registartion an assembly? Regards,

    Read the article
