Search Results

Search found 4835 results on 194 pages for 'svn dump'.


  • php parsing speed optimization

    - by Arnaud
    I would like to add tooltips or generate links according to the elements available in the database. For example, if the HTML page printed is:

        to reboot your linux host in single-user mode you can ...

    I will use explode(" ", $row[page]), and the idea is now to look up every single word in the page to find out if it has a related reference. In this example, let's say I've got a table reference with one entry for reboot and one for linux:

        reboot: restart a computer
        linux: operating system

    Now my output will look like:

        to <a href="ref/reboot">reboot</a> your <a href="ref/linux">linux</a> host in single-user mode you can ...

    Instead of having a static list generated when I saved the content, if I add more keywords in the future the text will become more interactive. My main concern and question is: how can I create an efficient enough process to do it? Should I store all the db entries in an array and compare them? Do an sql query for each word (seems to be crazy)? Dump the table in a file and use a very long regex, or a "grep -f pattern data" way of doing it? I'm sure there must be a better way of doing it, I just don't have a clue about it - or maybe this will be far too resource-unfriendly and I should avoid doing such things. Cheers!
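
    One possible direction (a sketch only, not a benchmarked answer - the table and column names here are hypothetical): fetch the keyword list once, build a single alternation regex, and rewrite the page in one pass instead of querying per word.

        <?php
        // Sketch: link every known keyword in one pass over the page.
        // Assumes a `reference` table with a `keyword` column (hypothetical).
        function link_keywords(PDO $db, $html) {
            // One query for the whole keyword list, not one per word.
            $keywords = $db->query('SELECT keyword FROM reference')
                           ->fetchAll(PDO::FETCH_COLUMN);
            if (!$keywords) {
                return $html;
            }
            // Build a single alternation: /\b(?:reboot|linux|...)\b/i
            $quoted  = array_map(function ($k) { return preg_quote($k, '/'); },
                                 $keywords);
            $pattern = '/\b(?:' . implode('|', $quoted) . ')\b/i';
            return preg_replace_callback($pattern, function ($m) {
                return '<a href="ref/' . rawurlencode(strtolower($m[0])) . '">'
                     . htmlspecialchars($m[0]) . '</a>';
            }, $html);
        }

    If the keyword table grows large, the list (or the compiled pattern) can be cached so the query runs at most once per request.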

  • Using nohup mysqldump from php script is inserting a '!' and breaking to a new line.

    - by Aglystas
    I'm trying to run a mysqldump from PHP using the nohup command to prevent the script from hanging. Here's the command (the database is mc6_erik_test; everything else is just a table list until you get to the end):

        exec("mysqldump -u root -pPassword -h vfmy1-dev.mountainmedia.com mc6_erik_test access_log admin affiliate affiliate_2_product authorized_ip category category_2_product claim_code claim_code_log country_exclude customer customer_2_subscription customer_account_log customer_address customer_bill customer_discount customer_ip customer_key email_bulk_log email_draft email_queue email_queue_log email_template endicia_log gift_wrap image_bulk_upload log mailing_list manufacturer merchant merchant_checkout merchant_ip merchant_ship merchant_ship_conf new_account_temp order_dest order_item order_item_2_dest order_item_2_package order_item_log order_item_registrant order_note order_package order_package_label orders package package_2_product pref product product_2_supplier product_also product_event_date product_image product_option product_related product_review product_review_helpful product_ship_disable report search_log subscription supplier temp_product transaction_account transactions wish_list wish_list_fill wish_list_item --opt --where='merchant_id=\'6\'' > /tmp/sync_db_card_20100519105358.sql");

    As you can see, it's really long, because I have to specifically include only the tables I want to dump. The command works great from the command line. However, when I run it through a web script, towards the end the following is being used as the command...

        supplier temp_product transaction_account transactio!
        ns wish_list wish_list_fill wish_list_item --opt --where='merchant_id="6"' > /tmp/sync_db_card_20100519105358.sql

    So the table 'transactions' is being split by an exclamation point and a newline. The rest of the command is exactly the same. If I run this through the php-cli interface it doesn't happen - only when I try running it via the webserver using nohup. I'm wondering if there is some inherent string length limit to using the exec command within a PHP script, or really if anyone has any general idea what is going on here.
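
    One way to sidestep whatever is wrapping the long command line (a sketch of a workaround, not a diagnosis of the '!' itself) is to write the command into a temporary shell script and run that, so the line handed to the shell stays short:

        <?php
        // Sketch: run the long mysqldump via a temporary shell script instead
        // of one huge inline command line. $tables stands in for the long
        // table list above (hypothetical variable).
        $cmd = "mysqldump -u root -pPassword -h vfmy1-dev.mountainmedia.com"
             . " mc6_erik_test " . implode(' ', $tables)
             . " --opt --where=\"merchant_id='6'\""
             . " > /tmp/sync_db_card_20100519105358.sql";

        $script = tempnam(sys_get_temp_dir(), 'dump_');
        file_put_contents($script, "#!/bin/sh\n" . $cmd . "\n");
        chmod($script, 0700);

        exec('nohup ' . escapeshellarg($script) . ' 2>&1', $output, $status);
        unlink($script);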

  • Access Violation in std::pair

    - by sameer karjatkar
    I have an application which is trying to populate a pair. Out of nowhere the application crashes. The Windbg analysis on the crash dump suggests:

        PRIMARY_PROBLEM_CLASS: INVALID_POINTER_READ
        DEFAULT_BUCKET_ID: INVALID_POINTER_READ
        STACK_TEXT:
        0389f1dc EPFilter32!std::vector<std::pair<unsigned int,unsigned int>,std::allocator<std::pair<unsigned int,unsigned int> > >::size+0xc
        INVALID_POINTER_READ_c0000005_Test.DLL!std::vector_std::pair_unsigned_int,unsigned_int_,std::allocator_std::pair_unsigned_int,unsigned_int_____::size

    Following is the code snippet where it fails:

        for (unsigned i1 = 0; i1 < size1; ++i1) {
            for (unsigned i2 = 0; i2 < size2; ++i2) {
                const branch_info& b1 = en1.m_branches[i1]; // Exception here: crash
                const branch_info& b2 = en2.m_branches[i2];
            }
        }

    where branch_info is std::pair<unsigned int,unsigned int> and en1.m_branches[i1] fetches me a pair value.

  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
            <Element>Value</Element>
            <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
            <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the Request and Response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep, but there the XML was all on one line, not multiple lines like above. The log files are also somewhat large, hitting 500-600 MB per log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way, using the built-in tools on a Linux box (CentOS in this case), to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
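
    For what it's worth, reading the whole file into memory isn't actually necessary in PHP either - a line-by-line pass keeps memory flat. A sketch, assuming every log entry starts with a "YYYY-MM-DD HH:MM:SS" timestamp as above (the input path and output naming are made up):

        <?php
        // Sketch: stream the log line by line so memory stays constant,
        // even for a 600 MB file. A timestamped line ends the current XML
        // block; an "XML:" marker opens a new output file.
        $in  = fopen('/var/log/app.log', 'r');
        $out = null;
        $n   = 0;
        while (($line = fgets($in)) !== false) {
            if (preg_match('/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} /', $line)) {
                // a new timestamped entry closes any open XML block
                if ($out !== null) { fclose($out); $out = null; }
                if (strpos($line, 'Request XML:') !== false
                    || strpos($line, 'Response XML:') !== false) {
                    $out = fopen(sprintf('xml_%04d.xml', ++$n), 'w');
                }
            } elseif ($out !== null) {
                fwrite($out, $line); // continuation lines are the XML payload
            }
        }
        if ($out !== null) fclose($out);
        fclose($in);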

  • DLL Exports: not all my functions are exported

    - by carmellose
    I'm trying to create a Windows DLL which exports a number of functions; however, all my functions are exported but one! I can't figure it out. The macro I use is this simple one:

        __declspec(dllexport) void myfunction();

    It works for all my functions except one. I've looked inside Dependency Walker and there they all are, except one. How can that be? What would be the cause of that? I'm stuck.

    Edit: to be more precise, here is the function in the .h:

        namespace my { namespace great { namespace namespaaace {
            __declspec(dllexport) void prob_dump(const char *filename,
                const double p[], int nx,
                const double Q[],
                const double xlow[], const char ixlow[],
                const double xupp[], const char ixupp[],
                const double A[], int my, const double bA[],
                const double C[], int mz,
                const double clow[], const char iclow[],
                const double cupp[], const char icupp[]);
        }}}

    And in the .cpp file it goes like this:

        namespace my { namespace great { namespace namespaaace {
            namespace {
                void dump_mtx(std::ostream& ostr, const double *mtx,
                              int rows, int cols, const char *ind = 0)
                {
                    /* some random code there, nothing special, no statics whatsoever */
                }
            } // end anonymous namespace here

            // dump the problem specification into a file
            void prob_dump(const char *filename,
                const double p[], int nx,
                const double Q[],
                const double xlow[], const char ixlow[],
                const double xupp[], const char ixupp[],
                const double A[], int my, const double bA[],
                const double C[], int mz,
                const double clow[], const char iclow[],
                const double cupp[], const char icupp[])
            {
                std::ofstream fout;
                fout.open(filename, std::ios::trunc);
                /* implementation there */
                dump_mtx(fout, Q, nx, nx);
            }
        }}}

    Thanks

  • grailsui plugin for accordion not working

    - by inquirer
    Hi! I have been trying to figure out what's wrong with the grailsui accordion. I have checked out the sample source files (http://guidemo.googlecode.com/svn/trunk/) for different ui tags, like menu, datatable, etc. They are working fine, but the accordion is not working properly. All I can see is all of the divs shown; the panels are not moving since they are already shown. I've also followed the screencast for it. I've used the yui-skin-sam class name and I've downloaded the grails plugin grails-ui=1.2-SNAPSHOT. I am not sure what's wrong. I used the rich-ui plugin and it is working great, but there are still features not in rich-ui compared to the possible configurations for the grails-ui tags. Anyway, if you have any suggestions as to what js library I may use for a grails application, or if you have a working grails-ui accordion sample, I would really appreciate it. Thanks in advance. Hope you can help me, since I am still new to Grails.

  • Is it OK to use WPF assemblies in a web app?

    - by Chris
    I have an ASP.NET MVC 2 app targeting .NET 4 that needs to be able to resize images on the fly and write them to the response. I have code that does this and it works, using System.Drawing.dll. However, I want to enhance my code so that I am not only resizing the image but also dropping it from 24bpp down to 4-bit grayscale. I could not, for the life of me, find code on how to do this with System.Drawing.dll. But I did find a bunch of WPF stuff. This is my working/sample code (runs in LinqPad):

        // Load the original 24 bit image
        var bitmapImage = new BitmapImage();
        bitmapImage.BeginInit();
        bitmapImage.UriSource = new Uri(@"C:\Temp\Resized\18_appa2_015.png", UriKind.Absolute);
        //bitmapImage.DecodePixelWidth = 600;
        bitmapImage.EndInit();

        // Create the destination image
        var formatConvertedBitmap = new FormatConvertedBitmap();
        formatConvertedBitmap.BeginInit();
        formatConvertedBitmap.Source = bitmapImage;
        formatConvertedBitmap.DestinationFormat = PixelFormats.Gray4;
        formatConvertedBitmap.EndInit();

        // Encode and dump the image to disk
        var encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(formatConvertedBitmap));
        using (var fileStream = File.Create(@"C:\Temp\Resized\18_appa2_015_s2.png"))
        {
            encoder.Save(fileStream);
        }

    It uses System.Xaml.dll, WindowsBase.dll, PresentationCore.dll, and PresentationFramework.dll. The namespaces used are System.Windows.Controls, System.Windows.Media, and System.Windows.Media.Imaging. Is there any problem using these namespaces in my web application? It doesn't seem right. If anyone knows how to drop the bit depth without all this WPF stuff (which I barely understand, BTW), I would be thrilled to see that too.

  • Where does the process' memory space start and where does it end?

    - by nhaa123
    Hi, I'm trying to dump memory from my application where the variables lye. Here's the function: void MyDump(const void *m, unsigned int n) { const unsigned char *p = reinterpret_cast<const unsigned char *(m); char buffer[16]; unsigned int mod = 0; for (unsigned int i = 0; i < n; ++i, ++mod) { if (mod % 16 == 0) { mod = 0; std::cout << " | "; for (unsigned short j = 0; j < 16; ++j) { switch (buffer[j]) { case 0xa: case 0xb: case 0xd: case 0xe: case 0xf: std::cout << " "; break; default: std::cout << buffer[j]; } } std::cout << "\n0x" << std::setfill('0') << std::setw(8) << std::hex << (long)i << " | "; } buffer[i % 16] = p[i]; std::cout << std::setw(2) << std::hex << static_cast<unsigned int(p[i]) << " "; if (i % 4 == 0 && i != 1) std::cout << " "; } } Now, how can I know from which address starts my process memory space, where all the variables are stored? And how do I now, how long the area is? For instance: MyDump(0x0000 /* <-- Starts from here? */, 0x1000 /* <-- This much? */); Best regards, nhaa123

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, provide a merging interface for each object, etc. My question is which source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world is three general paradigms:

        1. Single-repository, locked access (MS SourceSafe)
        2. Single-repository, concurrent access (CVS/SVN)
        3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI. The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.

  • CLI include paths to run zend framework via cron

    - by summerg
    I wrote a command line utility using Zend Framework to do some nightly reporting. It uses a ton of the same functionality as the accompanying site. It works great when I run it by hand, but when I run it on cron I have include path issues. Seems like it should be easily fixed with set_include_path, but maybe I'm missing something? My directory structure looks like this:

        /var/www/clientname/
            application
                Globals.php
            commandline
                commandline_bootstrap.php
            public_html
                public_bootstrap.php
            library
                Zend

    In public_bootstrap.php I use set_include_path without a problem, relative to the current directory:

        set_include_path('../library' . PATH_SEPARATOR . get_include_path());

    If I understand correctly, in commandline_bootstrap.php I need to put in the absolute path, so cron knows where everything is. My file starts like this:

        error_reporting(E_ALL);
        set_include_path('/var/www/clientname/library' . PATH_SEPARATOR . get_include_path());
        require_once "../application/Globals.php";

    But when I run it via cron I get the following error:

        PHP Fatal error: require_once(): Failed opening required '../application/Globals.php' (include_path='/var/www/clientname/library/') in /var/www/clientname/commandline/zfcli.php on line 11

    I think PHP is accepting my new path, because when I run it from the command line and dump the phpinfo I can see:

        include_path = /var/www/clientname/library/:.:/usr/share/pear:/usr/share/php = .:/usr/share/pear:/usr/share/php

    I admit the syntax here looks a little strange, but I can't figure out how to fix it. Any suggestions would be greatly appreciated. Thanks, summer
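
    Note that the require_once is still relative, and cron's working directory will not be the commandline folder, so '../application' resolves somewhere else entirely. A sketch of one fix, anchoring everything to the script's own location (assuming the layout shown above):

        <?php
        // commandline_bootstrap.php - sketch. dirname(__FILE__) is
        // /var/www/clientname/commandline no matter which directory
        // cron happens to start the process in.
        error_reporting(E_ALL);

        $root = dirname(dirname(__FILE__));   // /var/www/clientname
        set_include_path($root . '/library' . PATH_SEPARATOR . get_include_path());

        require_once $root . '/application/Globals.php';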

  • CakePHP: Missing database table

    - by Justin
    I have a CakePHP application that is running fine locally. I uploaded it to a production server, and the first page that uses a database connection gives the "Missing Database Table" error. When I look at the controller dump, it's complaining about the first table. I've tried a variety of things to fix this problem, with no luck:

        - I've confirmed that at the command line I can log in with the given MySQL credentials in database.php
        - I've confirmed this table exists
        - I've tried using the MySQL root credentials (temporarily) to see if the problem lies with the permissions of the user. The same error appeared.
        - My debug level is currently set to 3
        - I've deleted the entire contents of /app/tmp/cache
        - I've set 777 permissions on /app/tmp
        - I've confirmed that I can run DESCRIBE commands at the command-line MySQL client when logged in with the MySQL credentials used by the application
        - I've verified that the CakePHP log file only contains the error I'm seeing in the browser window
        - I've tried all the suggestions I could find in similar postings on SO
        - I've Googled around and didn't find any other ideas

    I think I've eliminated the obvious problems and my research isn't turning anything up. I feel like I'm missing something obvious. Any ideas?

  • [WEB] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello. The situation: we are a little company with 3 people; each has a localhost webserver, and most projects (previous and current) are on one PC's network-shared disk. We have a virtual server hosting some of our clients' sites and our own site. Our standard workflow is:

        Coder PC -> programmer localhost -> dev domain (client.company.com) -> live version (client.com)

    It often happens that two or three guys are working on the same project at the same time - one on the dev version, two on localhost. When finished, we try to synchronize the files on the dev version and ideally not mess (thanks ILMV :]) up any files, which *knock knock* doesn't happen often. And then one of us deploys the dev version on the live webserver.

    The question: we are looking for a way to simplify this workflow while updating websites - ideally some sort of diff uploader or probably a VCS (Git/SVN/...), but we are not completely sure where to begin or which way would be ideal. Therefore I ask you, fellow stackoverflowers, for your experience with website/application deployment and your recommended workflow. We will probably also need to use a Mac in the process, so if that won't be a problem, that would be even better. Thank you

  • Storing arbitrary data in HTML

    - by Rob Colburn
    What is the best way to embed data in html elements for later use? As an example, let's say we have jQuery returning some JSON from the server, and we want to dump that data out to the user as paragraphs. However, we want to be able to attach meta-data to these elements, so we can bind events for them later. The way I tend to handle this is with some ugly prefixing:

        function handle_response(data) {
            var html = '';
            for (var i in data) {
                html += '<p id="prefix_' + data[i].id + '">' + data[i].message + '</p>';
            }
            jQuery('#log').html(html).find('p').click(function () {
                alert('The ID is: ' + $(this).attr('id').substr(7));
            });
        }

    Alternatively, one can build a form in the paragraph and store the meta-data there. But that often feels like overkill. This has been asked before in different ways, but I do not feel it's been answered well:

        http://stackoverflow.com/questions/432174/how-to-store-arbitrary-data-for-some-html-tags
        http://stackoverflow.com/questions/209428/non-standard-attributes-on-html-tags-good-thing-bad-thing-your-thoughts

  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL: one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server), and another a couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them being that, under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens on a meaningful block of data once every month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate). The other most important (and annoying) recurring issue is that, when for some reason we do a major upload or update (or reboot) on one end and have to sever the link, then LOAD DATA FROM MASTER doesn't work and I have to manually dump on one end and upload on the other - quite a task nowadays, moving some 0.5 TB worth of data. Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). I am just wondering what people out there do. The way it's structured, we run an automatic failover where if one server is not up, the main URL just resolves to the other server.

  • C89, Mixing Variable Declarations and Code

    - by rutski
    I'm very curious to know why exactly C89 compilers will dump on you when you try to mix variable declarations and code, like this for example:

        rutski@imac:~$ cat test.c
        #include <stdio.h>

        int main(void)
        {
            printf("Hello World!\n");
            int x = 7;
            printf("%d!\n", x);
            return 0;
        }
        rutski@imac:~$ gcc -std=c89 -pedantic test.c
        test.c: In function 'main':
        test.c:7: warning: ISO C90 forbids mixed declarations and code
        rutski@imac:~$

    Yes, you can avoid this sort of thing by staying away from -pedantic. But then your code is no longer standards compliant. And as anybody capable of answering this post probably already knows, this is not just a theoretical concern. Platforms like Microsoft's C compiler enforce this quirk of the standard under any and all circumstances. Given how ancient C is, I would imagine that this feature is due to some historical issue dating back to the extraordinary hardware limitations of the 70's, but I don't know the details. Or am I totally wrong there?

  • How to restrict the content of a string to less than 4MB and save that string in a DB using C#

    - by Pranay B
    I'm working on a project where I need to get the text data from pdf files and dump the whole text into a DB column. With the help of iTextSharp, I got the data and stored it in a String. But now I need to check whether the string exceeds the 4MB limit or not, and if it does, accept only the string data which is less than 4MB in size. This is my code:

        internal string ReadPdfFiles()
        {
            // variable to store file path
            string filePath = null;
            // open dialog box to select file
            OpenFileDialog file = new OpenFileDialog();
            // dialog box title name
            file.Title = "Select Pdf File";
            // files to be accepted by the user
            file.Filter = "Pdf file (*.pdf)|*.pdf|All files (*.*)|*.*";
            // set initial directory of computer system
            file.InitialDirectory = Environment.GetFolderPath(Environment.SpecialFolder.Desktop);
            // set restore directory
            file.RestoreDirectory = true;
            // execute if block when dialog result box click ok button
            if (file.ShowDialog() == DialogResult.OK)
            {
                // store selected file path
                filePath = file.FileName.ToString();
            }
            // file path
            /// use a string array and pass all the pdfs for searching
            //String filePath = @"D:\Pranay\Documentation\Working on SSAS.pdf";
            try
            {
                // creating an instance of PdfReader class
                using (PdfReader reader = new PdfReader(filePath))
                {
                    // creating an instance of StringBuilder class
                    StringBuilder text = new StringBuilder();
                    // use loop to specify how many pages to read;
                    // I started from the 5th page as Piyush told
                    for (int i = 5; i <= reader.NumberOfPages; i++)
                    {
                        // read the pdf
                        text.Append(PdfTextExtractor.GetTextFromPage(reader, i));
                    } // end of for(i)
                    int k = 4096000;
                    // test whether the string exceeds the 4MB
                    if (text.Length < k)
                    {
                        // return the string
                        text1 = text.ToString();
                    } // end of if
                } // end of using
            } // end of try
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message, "Please Do select a pdf file!!",
                    MessageBoxButtons.OK, MessageBoxIcon.Warning);
            } // end of catch
            return text1;
        } // end of ReadPdfFiles() method

    Do help me!

  • How much detail should be in a project plan or spec?

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small-scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces are the last thing, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that so should my "spec" or design document. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks, so that development runs in a fairly linear fashion; my tasks tend to look like so:

        - Implement Binary File Reader
        - Implement Binary File Writer
        - Create Object to encapsulate Data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudo code it is essentially code anyway, so the time investment is negated. So my question is this: Am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (during peak hours; around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with fopen(file,'a') and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable. The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes, which will normalize it and insert it into another MySQL table. How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so any intuitive ways, if there are multiple ways of going at it, you smart people can think of... please let me know :)
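
    For reference, a minimal sketch of the flat-file variant mentioned above (the paths, fields, and table names are made up): the tracker only appends one line per hit, and the 15-30 minute job bulk-loads the batch in a single statement.

        <?php
        // tracker.php - runs on every page load; appending a line is far
        // cheaper than one INSERT per hit. Fields here are hypothetical.
        $line = implode("\t", array(
            time(),
            $_SERVER['REMOTE_ADDR'],
            isset($_GET['site_id']) ? $_GET['site_id'] : '',
            isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '',
        ));
        file_put_contents('/var/spool/tracker/hits.log', $line . "\n",
                          FILE_APPEND | LOCK_EX);

        // cron job, every 15-30 minutes: rotate the log so new hits go to a
        // fresh file, then bulk-load the rotated batch in one statement:
        //   mv /var/spool/tracker/hits.log /var/spool/tracker/hits.batch
        //   mysql> LOAD DATA INFILE '/var/spool/tracker/hits.batch'
        //          INTO TABLE raw_hits FIELDS TERMINATED BY '\t';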

  • how could application installations/configurations be easier in linux? [closed]

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking in config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier with time, and the apt-get installations with Ubuntu/Debian are heading the right way. But how can Linux be more user-friendly for us in the future? I thought that more could be automated, like in an IDE environment: e.g. typing svn would give us all the commands and a description of each command as you move between commands with your keyboard. That would be great. But that's just one example. Another is navigation in the terminal between folders: now you have to type a lot just to jump from/to different folders. It would be great with some more automation here too. I know that these extra features will slow down the server, but it's 2010 now, and these features are not that heavy for the cpu; they make it more user-friendly and encourage maintenance of a server, not frighten you off. What do you think about this? Should/could we have a more user-friendly Linux environment on servers - something that has annoyed you a lot? A lot of things are done in the unix way, but maybe we should reinvent the wheel in some areas, because apparently it's so... repetitive today, and difficult to do easy tasks. It should be easier, I think.

  • Ruby -- looking for some sort of "Regexp unescape" method

    - by RubyNoobie
    I have a bunch of strings that appear to have been double-escaped - e.g., I have:

        "\\014\"\\000\"\\016smoothing\"\\011mean\"\\022color\"\\011zero@\\016"

    but I want:

        "\014"\000"\016smoothing"\011mean"\022color"\011zero@\016"

    Is there a method I can use to unescape them? I imagine that I could make a regex to remove 1 backslash from every consecutive n backslashes, but I don't have a lot of regex experience and it seems there ought to be a "more elegant" way to do it. For example, when I puts MyString it displays the output I'd like, but I don't know how I might capture that into a variable. Thanks!

    Edited to add context: I have this class that is being used to marshal / restore some stuff, but when I restore some old strings it spits out a type error, which I've determined is because they weren't - for some inexplicable reason - stored as base64. They instead appear to be 'double-escaped', when I need them to be 'single-escaped' to get restored.

        require 'base64'

        class MarshaledStuff < ActiveRecord::Base
          validates_presence_of :marshaled_obj

          def contents
            obj = self.marshaled_obj
            return Marshal.restore(Base64.decode64(obj))
          end

          def contents=(newcontents)
            self.marshaled_obj = Base64.encode64(Marshal.dump(newcontents))
          end
        end

  • reading non-english html pages with c#

    - by Gal Miller
    I am trying to find a string in Hebrew in a website. The reading code is attached. Afterwards I try to read the file using StreamReader, but I can't match strings in other languages. What am I supposed to do?

        // used on each read operation
        byte[] buf = new byte[8192];

        // prepare the web page we will be asking for
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.webPage.co.il");

        // execute the request
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();

        // we will read data via the response stream
        Stream resStream = response.GetResponseStream();

        string tempString = null;
        int count = 0;

        FileStream fileDump = new FileStream(@"c:\dump.txt", FileMode.Create);
        do
        {
            count = resStream.Read(buf, 0, buf.Length);
            fileDump.Write(buf, 0, count); // write only the bytes actually read
        } while (count > 0); // any more data to read?
        fileDump.Close();

  • Differences between Assembly Code output of the same program

    - by ultrajohn
    I have been trying to replicate the buffer overflow example3 from the article by Aleph One. I'm doing this as practice for a project in a computer security course I'm taking, so please, I badly need your help. I've been following the example, performing the tasks as I go along. My problem is that the assembly code dumped by gdb on my computer (I'm doing this on a Debian Linux image running on VMware) is different from that of the example in the article. There are some constructs which I find confusing. Here is the one from my computer:

    Here is the one from the article:

        Dump of assembler code for function main:
        0x8000490 <main>:     pushl %ebp
        0x8000491 <main+1>:   movl  %esp,%ebp
        0x8000493 <main+3>:   subl  $0x4,%esp
        0x8000496 <main+6>:   movl  $0x0,0xfffffffc(%ebp)
        0x800049d <main+13>:  pushl $0x3
        0x800049f <main+15>:  pushl $0x2
        0x80004a1 <main+17>:  pushl $0x1
        0x80004a3 <main+19>:  call  0x8000470 <function>
        0x80004a8 <main+24>:  addl  $0xc,%esp
        0x80004ab <main+27>:  movl  $0x1,0xfffffffc(%ebp)
        0x80004b2 <main+34>:  movl  0xfffffffc(%ebp),%eax
        0x80004b5 <main+37>:  pushl %eax
        0x80004b6 <main+38>:  pushl $0x80004f8
        0x80004bb <main+43>:  call  0x8000378 <printf>
        0x80004c0 <main+48>:  addl  $0x8,%esp
        0x80004c3 <main+51>:  movl  %ebp,%esp
        0x80004c5 <main+53>:  popl  %ebp
        0x80004c6 <main+54>:  ret
        0x80004c7 <main+55>:  nop

    As you can see, there are differences between the two. I'm confused and I can't totally understand the assembly code from my computer. I would like to know the differences between the two. How is pushl different from push, mov vs movl, and so on... What does the expression 0xhexvalue(%register) mean? I am sorry if I'm asking a lot, but I badly need your help. Thanks for the help really...

  • How do I manage dependencies for automated builds on my build server?

    - by Tom Pickles
    I'm trying to implement continuous integration into our day-to-day workings. In our team, we're moving from just building our code in Visual Studio on our workstations and deploying, to using MSBuild.exe and automating on our build server (which is Jenkins) without the use of Visual Studio. We have external dependencies on references such as Automap in our projects. Because the Automap dll (for example) isn't on the build server, the msbuild execution fails, for obvious reasons. There are other dlls which I need to be part of the build; I'm just using Automap as an example. So what's the best way to get any dependencies onto the build server as part of the automated build? I've seen references to using a 'lib' folder, but I don't really understand where I should be putting it (in my project, filesystem, SVN...?) and how the build server will get to it. I've also read that NuGet can do something with dependencies, but my build server isn't connected to the internet, and I don't understand how I can get my build to pull a NuGet package I may have created, and how it all works together.

    Edit: I'm using Subversion, and we cannot use TeamCity as we would have to buy it and there's zero chance of funding.

  • adding JSON to a mutable array results in a crash

    - by user2957713
    Hello guys, I am new to Xcode/iOS development. I am trying to add JSON data to a mutable array, and it results in an app crash :( So far, here is my code:

        if (![defaults objectForKey:@"Person1"])
            [defaults setObject:[PersonsFromSearch objectAtIndex:index] forKey:@"Person1"];
        else {
            NSMutableArray *Array = [[NSMutableArray alloc] init];
            id object = [defaults objectForKey:@"Person1"];
            Array = [object isKindOfClass:[NSArray class]] ? object : @[object];
            [Array addObject:[PersonsFromSearch objectAtIndex:index]]; // crash here :((
            [Array moveObjectFromIndex:[Array count] toIndex:0];
        }

    Crash dump:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: '-[__NSCFDictionary addObject:]: unrecognized selector sent to instance 0xbf98dc0'

    What is wrong here? Can you please help me resolve this issue? Array contains this (JSON?):

        {
            Address = "\U05d3\U05e8\U05da \U05d4\U05e9\U05dc\U05d5\U05dd 53";
            CellPhone = "052-3275381";
            EMail = "[email protected]";
            EnglishPerson = "Yehuda Konfortes";
            FaceBookLink = "";
            Fax1 = "03-7330703";
            Fax2 = "";
            FileNAme = "100050.jpg";
            HomeEMail = "";
            HomeFax = "";
            HomePhone1 = "";
            HomePhone2 = "";
            PersonID = 100050;
            PersonName = "\U05d9\U05d4\U05d5\U05d3\U05d4 \U05e7\U05d5\U05e0\U05e4\U05d5\U05e8\U05d8\U05e1";
            Phone1 = "03-7330733";
            Phone2 = "";
            ZipCode = "";
        }

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet:

        int temp;
        vector<int> origins;
        vector<string> originTokens = OTUtils::tokenize(buffer, ","); // buffer is a char[] array

        // original loop
        BOOST_FOREACH(string s, originTokens) {
            from_string(temp, s);
            origins.push_back(temp);
        }

        // I'd like to use this to replace the above loop
        std::transform(originTokens.begin(), originTokens.end(), origins.begin(),
                       boost::bind<int>(&FromString<int>, boost::ref(temp), _1));

    where the function in question is:

        // the third parameter should be one of std::hex, std::dec or std::oct
        template <class T>
        bool FromString(T& t, const std::string& s,
                        std::ios_base& (*f)(std::ios_base&) = std::dec)
        {
            std::istringstream iss(s);
            return !(iss >> f >> t).fail();
        }

    The error I get is:

        1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment)
        1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919)
        1>
        1>    return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]);
        1>    ^
        1>
        1>icl: error #10298: problem during post processing of parallel object compilation

    Google is being unusually unhelpful, so I hope that someone here can provide some insights.
