Search Results

Search found 26912 results on 1077 pages for 'default programs'.

  • C# need help debugging a SOCKS5 connection attempt

    - by Chuck
    Hi, I've written the following code to (successfully) connect to a socks5 proxy. I send a user/pw auth and get an OK reply (0x00), but as soon as I tell the proxy to connect to whichever ip:port, it gives me 0x01 (general error). Socket socket5_proxy = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); IPEndPoint proxyEndPoint = new IPEndPoint(IPAddress.Parse("111.111.111.111"), 1080); // proxy ip, port. fake for posting purposes. socket5_proxy.Connect(proxyEndPoint); byte[] init_socks_command = new byte[4]; init_socks_command[0] = 0x05; init_socks_command[1] = 0x02; init_socks_command[2] = 0x00; init_socks_command[3] = 0x02; socket5_proxy.Send(init_socks_command); byte[] socket_response = new byte[2]; int bytes_recieved = socket5_proxy.Receive(socket_response, 2, SocketFlags.None); if (socket_response[1] == 0x02) { byte[] temp_bytes; string socks5_user = "foo"; string socks5_pass = "bar"; byte[] auth_socks_command = new byte[3 + socks5_user.Length + socks5_pass.Length]; auth_socks_command[0] = 0x05; auth_socks_command[1] = Convert.ToByte(socks5_user.Length); temp_bytes = Encoding.Default.GetBytes(socks5_user); temp_bytes.CopyTo(auth_socks_command, 2); auth_socks_command[2 + socks5_user.Length] = Convert.ToByte(socks5_pass.Length); temp_bytes = Encoding.Default.GetBytes(socks5_pass); temp_bytes.CopyTo(auth_socks_command, 3 + socks5_user.Length); socket5_proxy.Send(auth_socks_command); socket5_proxy.Receive(socket_response, 2, SocketFlags.None); if (socket_response[1] != 0x00) return; byte[] connect_socks_command = new byte[10]; connect_socks_command[0] = 0x05; connect_socks_command[1] = 0x02; // streaming connect_socks_command[2] = 0x00; connect_socks_command[3] = 0x01; // ipv4 temp_bytes = IPAddress.Parse("222.222.222.222").GetAddressBytes(); // target connection. fake ip, obviously temp_bytes.CopyTo(connect_socks_command, 4); byte[] portBytes = BitConverter.GetBytes(8888); connect_socks_command[8] = portBytes[0]; connect_socks_command[9] = portBytes[1]; socket5_proxy.Send(connect_socks_command); socket5_proxy.Receive(socket_response); if (socket_response[1] != 0x00) MessageBox.Show("Damn it"); // I always end here, 0x01 I've used this as a reference: http://en.wikipedia.org/wiki/SOCKS#SOCKS_5 Have I completely misunderstood something here? How I see it, I can connect to the socks5 fine. I can authenticate fine. But I/the proxy can't "do" anything? Yes, I know the proxy works. Yes, the target ip is available and yes the target port is open/responsive. I get 0x01 no matter what I try to connect to. Any help is VERY MUCH appreciated! Thanks, Chuck
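
    For reference, here is a minimal sketch of the same handshake in Python 3, following RFC 1928/1929 (the proxy address, credentials and target below are placeholders taken from the question, and the sketch assumes the proxy really does require username/password auth). Note that in the request the command byte for a TCP CONNECT is 0x01 and the port travels in network byte order:

        import socket
        import struct

        PROXY = ("111.111.111.111", 1080)   # placeholder proxy, as in the question
        TARGET = ("222.222.222.222", 8888)  # placeholder target
        USER, PASSWORD = b"foo", b"bar"

        s = socket.create_connection(PROXY)

        # greeting: version 5, one auth method offered (0x02 = username/password)
        s.sendall(bytes([0x05, 0x01, 0x02]))
        ver, method = s.recv(2)
        assert method == 0x02, "proxy refused username/password auth"

        # RFC 1929 username/password sub-negotiation
        s.sendall(bytes([0x01, len(USER)]) + USER + bytes([len(PASSWORD)]) + PASSWORD)
        ver, status = s.recv(2)
        assert status == 0x00, "authentication failed"

        # CONNECT request: VER=5, CMD=0x01 (CONNECT), RSV=0, ATYP=0x01 (IPv4),
        # 4 address bytes, then the port as an unsigned 16-bit big-endian integer
        request = bytes([0x05, 0x01, 0x00, 0x01])
        request += socket.inet_aton(TARGET[0])
        request += struct.pack(">H", TARGET[1])
        s.sendall(request)

        reply = s.recv(10)
        print("reply code:", reply[1])  # 0x00 means the proxy opened the connection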

    Read the article

  • jquery Uncaught SyntaxError: Unexpected token )

    - by js00831
    I am getting a jquery error in my code... can you guys tell me whats the reason for this when you click the united states red color button in this link you will see this error http://jsfiddle.net/SSMX4/88/embedded/result/ function checkCookie(){ var cookie_locale = readCookie('desired-locale'); var show_blip_count = readCookie('show_blip_count'); var tesla_locale = 'en_US'; //default to US var path = window.location.pathname; // debug.log("path = " + path); var parsed_url = parseURL(window.location.href); var path_array = parsed_url.segments; var path_length = path_array.length var locale_path_index = -1; var locale_in_path = false; var locales = ['en_AT', 'en_AU', 'en_BE', 'en_CA', 'en_CH', 'de_DE', 'en_DK', 'en_GB', 'en_HK', 'en_EU', 'jp', 'nl_NL', 'en_US', 'it_IT', 'fr_FR', 'no_NO'] // see if we are on a locale path $.each(locales, function(index, value){ locale_path_index = $.inArray(value, path_array); if (locale_path_index != -1) { tesla_locale = value == 'jp' ? 'ja_JP':value; locale_in_path = true; } }); // debug.log('tesla_locale = ' + tesla_locale); cookie_locale = (cookie_locale == null || cookie_locale == 'null') ? false:cookie_locale; // Only do the js redirect on the static homepage. if ((path_length == 1) && (locale_in_path || path == '/')) { debug.log("path in redirect section = " + path); if (cookie_locale && (cookie_locale != tesla_locale)) { // debug.log('Redirecting to cookie_locale...'); var path_base = ''; switch (cookie_locale){ case 'en_US': path_base = path_length > 1 ? path_base:'/'; break; case 'ja_JP': path_base = '/jp' break; default: path_base = '/' + cookie_locale; } path_array = locale_in_path != -1 ? path_array.slice(locale_in_path):path_array; path_array.unshift(path_base); window.location.href = path_array.join('/'); } } // only do the ajax call if we don't have a cookie if (!cookie_locale) { // debug.log('doing the cookie check for locale...') cookie_locale = 'null'; var get_data = {cookie:cookie_locale, page:path, t_locale:tesla_locale}; var query_country_string = parsed_url.query != '' ? parsed_url.query.split('='):false; var query_country = query_country_string ? (query_country_string.slice(0,1) == '?country' ? query_country_string.slice(-1):false):false; if (query_country) { get_data.query_country = query_country; } $.ajax({ url:'/check_locale', data:get_data, cache: false, dataType: "json", success: function(data){ var ip_locale = data.locale; var market = data.market; var new_locale_link = $('#locale_pop #locale_link'); if (data.show_blip && show_blip_count < 3) { setTimeout(function(){ $('#locale_msg').text(data.locale_msg); $('#locale_welcome').text(data.locale_welcome); new_locale_link[0].href = data.new_path; new_locale_link.text(data.locale_link); new_locale_link.attr('rel', data.locale); if (!new_locale_link.hasClass(data.locale)) { new_locale_link.addClass(data.locale); } $('#locale_pop').slideDown('slow', function(){ var hide_blip = setTimeout(function(){ $('#locale_pop').slideUp('slow', function(){ var show_blip_count = readCookie('show_blip_count'); if (!show_blip_count) { createCookie('show_blip_count',1,360); } else if (show_blip_count < 3 ) { var b_count = show_blip_count; b_count ++; eraseCookie('show_blip_count'); createCookie('show_blip_count',b_count,360); } }); },10000); $('#locale_pop').hover(function(){ clearTimeout(hide_blip); },function(){ setTimeout(function(){$('#locale_pop').slideUp();},10000); }); }); },1000); } } }); } }

    Read the article

  • Asymptotic complexity of a compiler

    - by Meinersbur
    What is the maximal acceptable asymptotic runtime of a general-purpose compiler? For clarification: the complexity of the compilation process itself, not of the compiled program, as a function of program size, for instance the number of source code characters, statements, variables, procedures, basic blocks, intermediate language instructions, assembler instructions, or whatever. This depends highly on your point of view, so this is a community wiki. See it from the view of someone who writes a compiler: will optimisation level -O4 ever be used for larger programs when one of its optimisations takes O(n^6)? Related questions: When is superoptimisation (exponential complexity or even incomputable) acceptable? What is acceptable for JITs? Does it have to be linear? What is the complexity of established compilers? GCC? VC? Intel? Java? C#? Turbo Pascal? LCC? LLVM? (Reference?) If you do not know what asymptotic complexity is: how long are you willing to wait until the compiler has compiled your project? (Scripting languages excluded.)
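
    To get a feel for the numbers, a rough back-of-envelope sketch in Python (assuming, purely for illustration, 10^8 elementary compiler operations per second and treating n as the number of statements):

        OPS_PER_SECOND = 1e8  # purely illustrative machine speed

        def seconds(n, exponent):
            """Estimated wall time for a pass costing n**exponent operations."""
            return n ** exponent / OPS_PER_SECOND

        for n in (1_000, 10_000, 100_000):
            print(f"n={n:>7}: O(n^2) {seconds(n, 2):>10.3f}s   "
                  f"O(n^3) {seconds(n, 3):>12.1f}s   O(n^6) {seconds(n, 6):.1e}s")

    Even at n = 10,000 statements, an O(n^6) pass would run for roughly 10^16 seconds under these assumptions, which is why such expensive optimisations are only plausible on very small units such as individual basic blocks.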

    Read the article

  • How can I build something like Amazon S3 in Perl?

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called parkplace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and add more once I get it going): easy API implementation for client-side apps (maybe REST?); a centralized database server for the user DB (maybe PostgreSQL?); logging of all connections, bandwidth used, and pretty much everything else to a centralized server (maybe PostgreSQL again?); easy server-side configuration (config file(s) stored on the servers); a web-based control panel for admin(s) and user(s) to view logs (could work just by running queries against the databases); fast; high uptime; low memory usage; some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?); and maybe a cache of some sort (memcached or Perlbal or something else?). Thanks in advance.

    Read the article

  • Why do people have to use multiple versions of jQuery in the same page?

    - by reprogrammer
    I have noticed that sometimes people have to use multiple versions of jQuery in the same page (see question 1 and question 2). I assume people have to carry old versions of jQuery because some pieces of their code are based on an older version of jQuery. Obviously, this approach causes inefficiency. The ideal solution is to refactor the old code to use the newer jQuery API. I wonder if there are tools that automate the process of upgrading a piece of code to use a newer version of jQuery. I've never written programs in either JavaScript or jQuery, so I'd like to hear from programmers experienced in these languages about their opinion on this issue. In particular, I'd like to know the following: How much of a problem is it to have to load multiple versions of jQuery? Have you ever had to load multiple versions of any other library in the same page? Do you know of any refactoring tools that help you migrate your code to use the updated API? Do you think such a refactoring tool would be useful? Would you be willing to use it?

    Read the article

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also: Using the same PixelFormat as the source bitmap, to avoid format conversion Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again.) I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (that reads the image.) I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy) so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format. What raster image file formats support 48bpp and/or 64bpp?
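
    Two formats commonly used at these depths are PNG, which allows 16 bits per sample (so colour type 2 gives 48bpp RGB and colour type 6 gives 64bpp RGBA) and is losslessly compressed and viewable in most browsers, and TIFF, which also allows 16 bits per sample and can store the pixels uncompressed. Whether the GDI+ 64bppPARGB buffer can be dropped in unchanged is a separate question, since PNG samples are big-endian and not premultiplied. As a sanity check, a minimal sketch in Python that writes a tiny 64bpp (16-bit RGBA) PNG; the pixel values are arbitrary test data:

        import struct, zlib

        def chunk(tag, data):
            # length, tag, data, CRC over tag+data
            return (struct.pack(">I", len(data)) + tag + data +
                    struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

        def write_rgba16_png(path, width, height, pixels):
            """pixels: iterable of (r, g, b, a) tuples with 16-bit values, row-major."""
            raw = bytearray()
            it = iter(pixels)
            for _ in range(height):
                raw.append(0)  # filter type 0 (None) for each scanline
                for _ in range(width):
                    r, g, b, a = next(it)
                    raw += struct.pack(">4H", r, g, b, a)  # PNG samples are big-endian
            ihdr = struct.pack(">IIBBBBB", width, height, 16, 6, 0, 0, 0)  # depth 16, colour type 6 = RGBA
            with open(path, "wb") as f:
                f.write(b"\x89PNG\r\n\x1a\n")
                f.write(chunk(b"IHDR", ihdr))
                f.write(chunk(b"IDAT", zlib.compress(bytes(raw))))
                f.write(chunk(b"IEND", b""))

        # a 2x2 test image: red, green, blue, half-transparent white
        write_rgba16_png("test64bpp.png", 2, 2, [
            (65535, 0, 0, 65535), (0, 65535, 0, 65535),
            (0, 0, 65535, 65535), (65535, 65535, 65535, 32768),
        ])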

    Read the article

  • boost::serialization with mutable members

    - by redmoskito
    Using boost::serialization, what's the "best" way to serialize an object that contains cached, derived values in mutable members, such that cached members aren't serialized, but on deserialization, they are initialized the their appropriate default. A definition of "best" follows later, but first an example: class Example { public: Example(float n) : num(n), sqrt_num(-1.0) {} float get_num() const { return num; } // compute and cache sqrt on first read float get_sqrt() const { if(sqrt_num < 0) sqrt_num = sqrt(num); return sqrt_num; } template <class Archive> void serialize(Archive& ar, unsigned int version) { ... } private: float num; mutable float sqrt_num; }; On serialization, only the "num" member should be saved. On deserialization, the sqrt_num member must be initialized to its sentinel value indicating it needs to be computed. What is the most elegant way to implement this? In my mind, an elegant solution would avoid splitting serialize() into separate save() and load() methods (which introduces maintenance problems). One possible implementation of serialize: template <class Archive> void serialize(Archive& ar, unsigned int version) { ar & num; sqrt_num = -1.0; } This handles the deserialization case, but in the serialization case, the cached value is killed and must be recomputed. Also, I've never seen an example of boost::serialize that explicitly sets members inside of serialize(), so I wonder if this is generally not recommended. Some might suggest that the default constructor handles this, for example: int main() { Example e; { std::ifstream ifs("filename"); boost::archive::text_iarchive ia(ifs); ia >> e; } cout << e.get_sqrt() << endl; return 0; } which works in this case, but I think fails if the object receiving the deserialized data has already been initialized, as in the example below: int main() { Example ex1(4); Example ex2(9); cout << ex1.get_sqrt() << endl; // outputs 2; cout << ex2.get_sqrt() << endl; // outputs 3; // the following two blocks should implement ex2 = ex1; // save ex1 to archive { std::ofstream ofs("filename"); boost::archive::text_oarchive oa(ofs); oa << ex1; } // read it back into ex2 { std::ifstream ifs("filename"); boost::archive::text_iarchive ia(ifs); ia >> ex2; } // these should be equal now, but aren't, // since Example::serialize() doesn't modify num_sqrt cout << ex1.get_sqrt() << endl; // outputs 2; cout << ex2.get_sqrt() << endl; // outputs 3; return 0; } I'm sure this issue has come up with others, but I have struggled to find any documentation on this particular scenario. Thanks!
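
    The same tension exists with Python's pickle, where the usual pattern is to drop the cached field when saving and restore the sentinel when loading. This is only an analogy to the boost situation, not boost API, but it may make the intent clearer:

        import math
        import pickle

        class Example:
            def __init__(self, n):
                self.num = n
                self._sqrt = None  # cached value, derived from num

            def get_sqrt(self):
                if self._sqrt is None:
                    self._sqrt = math.sqrt(self.num)
                return self._sqrt

            def __getstate__(self):
                # serialize only the real state, never the cache
                return {"num": self.num}

            def __setstate__(self, state):
                # restore the sentinel so the cache is recomputed on first use
                self.num = state["num"]
                self._sqrt = None

        e1 = Example(9)
        e1.get_sqrt()                      # populate the cache
        e2 = pickle.loads(pickle.dumps(e1))
        print(e2.__dict__)                 # {'num': 9, '_sqrt': None}
        print(e2.get_sqrt())               # 3.0, recomputed lazily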

    Read the article

  • Perl program - Dynamic Bootstrapping code

    - by mgj
    Hi. I need to understand how this particular program works. It seems quite complicated, so could you please help me understand what this Perl program does? I am a beginner, so I can hardly follow what is happening in the code below. Any guidance or insight into this program is highly appreciated. Thank you. :) The program is called premove.pl.c and is associated with one more program, premove.pl. Its code looks like this:

        #!perl
        open (newdata,">newdata.txt") || die("cant create new file\n"); #create passwd file
        $linedata = "";
        while($line=<>){
            chomp($line);
            #chop($line);
            print newdata $line."\n";
        }
        close(newdata);
        close(olddata);
        __END__

    I am not even sure how to run the two programs mentioned here. I also wonder what the extension of the first program signifies, as it has a "pl.c" extension; please let me know if you know what it could mean. I need to understand it ASAP, which is why I am posting this question; I am short of time, otherwise I would try to figure it out myself. This seems like a complex program for a beginner like me. I hope you understand. Thank you again for your time.

    Read the article

  • What are the most interesting equivalences arising from the Curry-Howard Isomorphism?

    - by Tom
    I came upon the Curry-Howard Isomorphism relatively late in my programming life, and perhaps this contributes to my being utterly fascinated by it. It implies that for every programming concept there exists a precise analogue in formal logic, and vice versa. Here's an "obvious" list of such analogies, off the top of my head:

        program/definition       | proof
        type/declaration         | proposition
        inhabited type           | theorem
        function                 | implication
        function argument        | hypothesis/antecedent
        function result          | conclusion/consequent
        function application     | modus ponens
        recursion                | induction
        identity function        | tautology
        non-terminating function | absurdity
        tuple                    | conjunction (and)
        disjoint union           | exclusive disjunction (xor)
        parametric polymorphism  | universal quantification

    So, to my question: what are some of the more interesting/obscure implications of this isomorphism? I'm no logician, so I'm sure I've only scratched the surface with this list. For example, here are some programming notions for which I'm unaware of pithy names in logic:

        currying | "((a & b) => c) iff (a => (b => c))"
        scope    | "known theory + hypotheses"

    And here are some logical concepts which I haven't quite pinned down in programming terms:

        primitive type?        | axiom
        set of valid programs? | theory
        ?                      | disjunction (or)
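
    Two of the rows can be spelled out directly in typed code. A small illustration using Python's typing module, with type variables standing in for propositions, offered only as a sketch:

        from typing import Callable, Tuple, TypeVar

        A = TypeVar("A")
        B = TypeVar("B")
        C = TypeVar("C")

        # function application | modus ponens: from a proof of A -> B and a proof of A, conclude B
        def modus_ponens(f: Callable[[A], B], a: A) -> B:
            return f(a)

        # currying | "((a & b) => c) iff (a => (b => c))", with the tuple playing conjunction
        def curry(f: Callable[[Tuple[A, B]], C]) -> Callable[[A], Callable[[B], C]]:
            return lambda a: lambda b: f((a, b))

        def uncurry(f: Callable[[A], Callable[[B], C]]) -> Callable[[Tuple[A, B]], C]:
            return lambda ab: f(ab[0])(ab[1])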

    Read the article

  • Monotouch or Titanium for rapid application development on IPhone?

    - by Ronnie
    As a .NET developer I always dreamed of the possibility of developing applications for the iPhone with my existing skills (C#). Both products require a Mac and the iPhone SDK installed. Appcelerator Titanium was the first one I tried; it is based on exposing some of the iPhone's native API to JavaScript so that it can be called from that language. MonoTouch starts at $399 to be able to deploy on the iPhone rather than only on the iPhone simulator, while Titanium is free. MonoTouch (MonoDevelop) has an IDE that is currently missing in Titanium (but you can use any editor, like TextMate or Aptana). I think both products ultimately generate a native precompiled app (though I am not sure about the size of the final app on the iPhone, as I think the .NET framework calls are prelinked at compilation time in MonoTouch). I am also not sure about the full coverage of the iPhone API and features. Titanium also has the advantage of enabling Android app development, but as a C# developer I still find the MonoTouch experience closer to the Visual Studio one. Which one would you choose, and what are your experiences with MonoTouch and Titanium?

    Read the article

  • How to limit traffic using multicast over localhost

    - by Shane Holloway
    I'm using multicast UDP over localhost to implement a loose collection of cooperative programs running on a single machine. The following code works well on Mac OSX, Windows and linux. The flaw is that the code will receive UDP packets outside of the localhost network as well. For example, sendSock.sendto(pkt, ('192.168.0.25', 1600)) is received by my test machine when sent from another box on my network. import platform, time, socket, select addr = ("239.255.2.9", 1600) sendSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) sendSock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 24) sendSock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("127.0.0.1")) recvSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) recvSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True) if hasattr(socket, 'SO_REUSEPORT'): recvSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, True) recvSock.bind(("0.0.0.0", addr[1])) status = recvSock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, socket.inet_aton(addr[0]) + socket.inet_aton("127.0.0.1")); while 1: pkt = "Hello host: {1} time: {0}".format(time.ctime(), platform.node()) print "SEND to: {0} data: {1}".format(addr, pkt) r = sendSock.sendto(pkt, addr) while select.select([recvSock], [], [], 0)[0]: data, fromAddr = recvSock.recvfrom(1024) print "RECV from: {0} data: {1}".format(fromAddr, data) time.sleep(2) I've attempted to recvSock.bind(("127.0.0.1", addr[1])), but that prevents the socket from receiving any multicast traffic. Is there a proper way to configure recvSock to only accept multicast packets from the 127/24 network, or do I need to test the address of each received packet?
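
    As a sketch of the fallback mentioned at the end, the source address of each datagram can be checked with Python 3's standard ipaddress module (a backport exists for Python 2). The question says 127/24; the whole loopback block is 127.0.0.0/8, so adjust the prefix as needed:

        import ipaddress

        LOOPBACK = ipaddress.ip_network("127.0.0.0/8")

        def from_loopback(fromAddr):
            """fromAddr is the (host, port) tuple returned by recvfrom."""
            return ipaddress.ip_address(fromAddr[0]) in LOOPBACK

        # inside the receive loop:
        # data, fromAddr = recvSock.recvfrom(1024)
        # if not from_loopback(fromAddr):
        #     continue  # drop packets that did not originate on this host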

    Read the article

  • How Does a COM Program Locate a .NET DLL Registered for COM Interop?

    - by Eric J.
    One customer wants to consume our .NET DLLs from VB6. They are designed to support reverse interop and all works fine... except: There are two separate VB6 programs in two different directories. It seems it's necessary to do one of: Copy the .NET DLL into both directories, or Install the .NET DLL in the GAC This is the customer's observation and also supported by the RegAsm documentation: After registering an assembly using Regasm.exe, you can install it in the global assembly cache so that it can be activated from any COM client. If the assembly is only going to be activated by a single application, you can place it in that application's directory. I'm confused on this point. First point of confusion: As far as I understand, the COM runtime locates the DLL using the Prog ID / Class ID. When I look in the registry at the Class ID entry, I see the full path to the .NET DLL in the CodeBase key. Why is it that a COM program using the Prog ID / Class ID doesn't locate the .NET DLL using the CodeBase? Second point of confusion: The GAC is specific to .NET. How is it involved in resolving COM references?

    Read the article

  • RDP through TCP Proxy

    - by johng100
    Hi, First time in Stackoverflow and I'm hoping someone can help me. I'm looking at a proof of concept to pass RDP traffic through a TCP Proxy/tunnel which will pass through firewalls using HTTPS. The problem has to do with deploying images to machines and so it can't be assumed that the .NET framework will be present, so C++ is being used at the deployment end of a connection. The basic system I have at present is a program which listens for client connections on a port then passes any data to a WCF service which stores it as a byte array. A deployment machine (using GSoap and C++) polls the WCF service for messages and if it finds them then passes the data onto the target server process via sockets. I know this sounds horrible, but it works for simple test clients and server passing data to and from simple test client and server programs via this WCF/C++/C# proxy layer. But I have to support traffic from RDP, VNC and possibly others, so I need a transparent proxy to do this and am wondering whether the above approach is worth pursuing. I've read up on SSH tunneling and that seems a possibility. My basic question is is it possible to tunnel RDP traffic over HTTPS using custom code. Thanks John
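
    For the proof of concept, the core of a transparent TCP tunnel is just a byte pump that copies traffic in both directions without interpreting it, which is why RDP or VNC do not care what sits in the middle. A minimal sketch in Python (plain TCP only; it ignores the HTTPS/WCF store-and-forward layer described above, and the addresses are placeholders):

        import select
        import socket

        LISTEN_ADDR = ("0.0.0.0", 4489)       # where the client connects (placeholder)
        TARGET_ADDR = ("192.168.0.10", 3389)  # the real RDP server (placeholder)

        def relay(a, b):
            """Copy bytes between two connected sockets until either side closes."""
            sockets = [a, b]
            while True:
                readable, _, _ = select.select(sockets, [], [])
                for src in readable:
                    dst = b if src is a else a
                    data = src.recv(4096)
                    if not data:          # one side closed; tear the tunnel down
                        a.close()
                        b.close()
                        return
                    dst.sendall(data)

        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(LISTEN_ADDR)
        listener.listen(1)

        client, _ = listener.accept()
        server = socket.create_connection(TARGET_ADDR)
        relay(client, server)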

    Read the article

  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel machine, I can see in the Windows Task Manager that all four cores of my CPU are actually more or less active. One core is more active than the other three, but there is activity on those as well. There's no other program running (besides the OS kernel, of course) that would plausibly account for that activity, and when I close my program all activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure the visible activity comes directly or indirectly (e.g. by invoking system routines) from my program. Is it possible that the OS or the cores themselves try to balance some code or execution across all four cores, even though it's not a multithreaded program? Do you have any links that document this technique? Some info about the program: it's a console app written with Qt, and the Task Manager states that only one thread is running. Maybe Qt uses threads, but I don't use signals or slots, nor any GUI. Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png This question is language agnostic and not tied to Qt/C++; I just want to know whether Windows or Intel balance single-threaded code across all cores as well. If they do, how does this technique work? All I can think of is that kernel routines, like reading from disk, are scheduled on all cores, but this won't improve performance significantly since the code still has to run synchronously with the kernel API calls. EDIT: Do you know any tools that offer better analysis of single- and/or multi-threaded programs than the poor Windows Task Manager?
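
    One way to test whether ordinary scheduler migration explains the picture is to pin the process to a single core and watch Task Manager again. A small sketch using the third-party psutil package (which, as far as I know, supports affinity on Windows and Linux; the busy loop merely stands in for your program's work):

        import psutil

        # Restrict the current process to logical CPU 0 only.
        proc = psutil.Process()
        print("affinity before:", proc.cpu_affinity())
        proc.cpu_affinity([0])
        print("affinity after:", proc.cpu_affinity())

        # Burn CPU in a single thread; with the affinity set, the load
        # should stay on one core instead of wandering across all of them.
        x = 0
        while True:
            x = (x + 1) % 1_000_003

    If the load stops wandering once the affinity is set, what you were seeing was the scheduler moving the one runnable thread between cores on successive time slices, not parallel execution of single-threaded code.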

    Read the article

  • Simple grid layout question

    - by Matt H.
    I can't believe I'm back to this after working with WPF for 3 months. :) Consider this very common setup: how do I configure the row heights so that the top and bottom rows (menu bar and status bar) size themselves to fit the height of their content, while the middle row (main content) fills the remaining available space in the program? I can't fix the height of the top/bottom rows because the height of their content can vary.

        <Window>
          <Grid>
            <Grid.RowDefinitions>
              <RowDefinition/>
              <RowDefinition/>
              <RowDefinition/>
            </Grid.RowDefinitions>
            <Menu Grid.Row="0">
              ...menuitems
            </Menu>
            <Grid Grid.Row="1">
              ...main application content here (like most programs)
            </Grid>
            <StatusBar>
              ...statusbaritems
            </StatusBar>
          </Grid>
        </Window>

    Read the article

  • simple Java "service provider frameworks"?

    - by Jason S
    I refer to "service provider framework" as discussed in Chapter 2 of Effective Java, which seems like exactly the right way to handle a problem I am having, where I need to instantiate one of several classes at runtime, based on a String to select which service, and an Configuration object (essentially an XML snippet): But how do I get the individual service providers (e.g. a bunch of default providers + some custom providers) to register themselves? interface FooAlgorithm { /* methods particular to this class of algorithms */ } interface FooAlgorithmProvider { public FooAlgorithm getAlgorithm(Configuration c); } class FooAlgorithmRegistry { private FooAlgorithmRegistry() {} static private final Map<String, FooAlgorithmProvider> directory = new HashMap<String, FooAlgorithmProvider>(); static public FooAlgorithmProvider getProvider(String name) { return directory.get(serviceName); } static public boolean registerProvider(String name, FooAlgorithmProvider provider) { if (directory.containsKey(name)) return false; directory.put(name, provider); return true; } } e.g. if I write custom classes MyFooAlgorithm and MyFooAlgorithmProvider to implement FooAlgorithm, and I distribute them in a jar, is there any way to get registerProvider to be called automatically, or will my client programs that use the algorithm have to explicitly call FooAlgorithmRegistry.registerProvider() for each class they want to use?
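
    For comparison, the "register yourself as soon as your code is loaded" idea is easy to see in Python, where a base class can enrol every subclass the moment its defining module is imported. This is only an analogy for the design question, not a drop-in Java answer (in Java itself, java.util.ServiceLoader with a META-INF/services entry is the standard way to get providers picked up without explicit registration calls):

        # registry of named providers; subclasses register themselves when defined
        REGISTRY = {}

        class FooAlgorithmProvider:
            name = None  # subclasses must set this

            def __init_subclass__(cls, **kwargs):
                super().__init_subclass__(**kwargs)
                if cls.name is None:
                    raise TypeError("provider subclasses must define a name")
                REGISTRY[cls.name] = cls

            def get_algorithm(self, configuration):
                raise NotImplementedError

        # merely importing the module that defines this class registers it
        class MyFooAlgorithmProvider(FooAlgorithmProvider):
            name = "my-foo"

            def get_algorithm(self, configuration):
                return ("MyFooAlgorithm", configuration)  # stand-in for a real algorithm object

        print(REGISTRY["my-foo"]().get_algorithm({"threshold": 3}))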

    Read the article

  • Windows console

    - by b-gen-jack-o-neill
    Hello. Well, I have a simple question, or at least I hope it's simple. I have been interested in the Win32 console for a while. Our teacher told us that the Windows console is just for DOS and real-mode emulation purposes. I know that isn't true, because DOS applications are run by an emulator which only uses the console to display output. Another thing I learned is that the console has been built into Windows since NT. But what I could not find out is how console programs are actually written to use the console. I use Visual C++ for programming (well, for learning), so the only thing I need to do to use the console is select a console project. I first thought that Windows decides whether to run the app in a console or in window mode, so I created a Win32 program and tried printf(). Well, I could not compile it. I know that by definition printf() prints text or variables to stdout. I also found that stdout is the console interface for output, but I could not find out what stdout actually is. So basically, what I want to ask is: what is the difference between a console app and a Win32 app? I thought that Windows starts a console when it gets a call from "console-family" functions, but obviously it does not, so there must be some code that actually tells Windows to create the console interface. The second question is: when the console is created, how does Windows know which console terminal is used by which app? I mean, what actually is stdout? Is it an area in memory, or some Windows routine that is called? Thanks.
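
    For what it's worth, the console is created and addressed through ordinary Win32 calls; whether the loader attaches one automatically is decided by the subsystem field in the executable's PE header. A small sketch using Python's ctypes (run it with pythonw.exe, which is a GUI-subsystem binary, to see a console appear on demand; the exact text written is just an example):

        import ctypes

        kernel32 = ctypes.windll.kernel32
        STD_OUTPUT_HANDLE = -11

        # A GUI-subsystem process starts without a console; this asks Windows to create one.
        kernel32.AllocConsole()

        # "stdout" at this level is just the process's standard output handle.
        handle = kernel32.GetStdHandle(STD_OUTPUT_HANDLE)

        msg = b"hello from a freshly allocated console\r\n"
        written = ctypes.c_ulong(0)
        kernel32.WriteConsoleA(handle, msg, len(msg), ctypes.byref(written), None)

        kernel32.Sleep(5000)  # keep the new console window visible for a few seconds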

    Read the article

  • Corrupted mysql table, cause crash in mysql.h (c++)

    - by Francesco
    I've created a very simple MySQL class in C++, but when MySQL crashes and table indexes become corrupted, all my C++ programs crash too, because they seem unable to recognize the corrupted table and let me handle the issue:

        Q_RES = mysql_real_query(MY_mysql, tmp_query.c_str(), (unsigned int) tmp_query.size());
        if (Q_RES != 0) {
            if (Q_RES == CR_COMMANDS_OUT_OF_SYNC)
                cout << "errorquery : CR_COMMANDS_OUT_OF_SYNC " << endl;
            if (Q_RES == CR_SERVER_GONE_ERROR)
                cout << "errorquery : CR_SERVER_GONE_ERROR " << endl;
            if (Q_RES == CR_SERVER_LOST)
                cout << "errorquery : CR_SERVER_LOST " << endl;
            LAST_ERROR = mysql_error(MY_mysql);
            if (n_retrycount < n_retry_limit) {
                // RETRY!
                n_retrycount++;
                sleep(1);
                cout << "SLEEP - query retry! " << endl;
                ping();
                return select_sql(tmp_query);
            }
            return false;
        }
        MY_result = mysql_store_result(MY_mysql);
        B_stored_results = true;
        cout << "b8" << endl;
        LAST_affected_rows = (mysql_num_rows(MY_result) + 1); // could return -1
        cout << "b8-1" << endl;

    The program terminates with a segmentation fault after printing "b8" and before "b8-1"; Q_RES reports no error even when the table is corrupted. I would like to know if there is a way to recognize that the table has problems, so that I can then run a MySQL repair or check. Thanks, Francesco

    Read the article

  • Is it practical to program with your feet?

    - by bmm
    Has anyone tried using foot pedals in addition to the traditional keyboard and mouse combo to improve your effectiveness in the editor? Any actual experiences out there? Does it work, or is it just for carpal tunnel relief? I found one blog entry from a programmer who actually tried it: So now I can type using my feet for most of the modifier keys. I am using the pedals as I type this. I am still getting used to them, but the burning in my left wrist has definitely reduced. I think I can also type a little faster, but I am too lazy to do the speed tests with and without the pedals to verify this. On the negative side: Working out where to put your feet when you aren’t typing can be a little awkward. The pedals tend to move around the carpet, despite being metal and quite heavy. Some small spikes might have helped. Although the travel on the pedals is small, they are surprisingly stiff. Another programmer's experience: Anybody with hand pain must get foot pedals, since they can remove a tremendous load from your hands. I have two foot pedals, and use one for the SHIFT key, and the other for the CONTROL key. (I still type META by hand.) I have found that in the process of using the Emacs text editor to compose computer programs, I tend to use the SHIFT, CONTROL and META keys constantly, and it is easy to remove most of this load from one's hands. Some foot switch products: Savant Elite Triple Foot Switch FragPedal Bilbo Step On It!

    Read the article

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (-who isn't!) but priority is being given to caching. Literally everything is cached. DB rows, DB id queries, Configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is a opcode cache or memory cache such as apc, eaccelerator, xcache or memcached. If an entry is not found in there it is then searched for in the secondary slow cache, ie php includes. Are the opcode caches actually faster than doing a require_once to a php file with a var_export'd array of data in it? My tests are inconclusive as my development box (5.3 of XAMPP) keeps throwing errors installing any of the aforementioned programs. Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place, however this is not the question. Because a page script can have up to 50/60 helper files included I have a feeling that if the site was under pressure it would buckle because of all the i/o that this incurs. Ignore for the moment that there is output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried to do is join all the helper files required for the scripts execution in one single file. This is achievable and works well, however it has a side effect of greatly increasing the memory usage dramatically even though technically the same code is being used. What are your thoughts and opinions on this?

    Read the article

  • Are there some cases where Python threads can safely manipulate shared state?

    - by erikg
    Some discussion in another question has encouraged me to to better understand cases where locking is required in multithreaded Python programs. Per this article on threading in Python, I have several solid, testable examples of pitfalls that can occur when multiple threads access shared state. The example race condition provided on this page involves races between threads reading and manipulating a shared variable stored in a dictionary. I think the case for a race here is very obvious, and fortunately is eminently testable. However, I have been unable to evoke a race condition with atomic operations such as list appends or variable increments. This test exhaustively attempts to demonstrate such a race: from threading import Thread, Lock import operator def contains_all_ints(l, n): l.sort() for i in xrange(0, n): if l[i] != i: return False return True def test(ntests): results = [] threads = [] def lockless_append(i): results.append(i) for i in xrange(0, ntests): threads.append(Thread(target=lockless_append, args=(i,))) threads[i].start() for i in xrange(0, ntests): threads[i].join() if len(results) != ntests or not contains_all_ints(results, ntests): return False else: return True for i in range(0,100): if test(100000): print "OK", i else: print "appending to a list without locks *is* unsafe" exit() I have run the test above without failure (100x 100k multithreaded appends). Can anyone get it to fail? Is there another class of object which can be made to misbehave via atomic, incremental, modification by threads? Do these implicitly 'atomic' semantics apply to other operations in Python? Is this directly related to the GIL?
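
    For contrast, an operation that is not a single bytecode, such as an in-place increment of a shared global, races readily with essentially the same harness. Note that list.append not failing above is a CPython/GIL implementation detail rather than a language guarantee, and how reliably the increment race fires depends on the interpreter version and switch interval; a sketch:

        import sys
        import threading

        sys.setswitchinterval(1e-6)  # encourage frequent thread switches (Python 3)

        counter = 0

        def bump(n):
            global counter
            for _ in range(n):
                counter += 1  # LOAD, ADD, STORE: another thread can interleave here

        threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(8)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # often comes up short of 800000 on older CPython; newer versions may switch less eagerly
        print("expected", 8 * 100_000, "got", counter)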

    Read the article

  • Why does Clojure hang after having performed my calculations?

    - by Thomas
    Hi all, I'm experimenting with filtering through elements in parallel. For each element, I need to perform a distance calculation to see if it is close enough to a target point. Never mind that data structures already exist for doing this, I'm just doing initial experiments for now. Anyway, I wanted to run some very basic experiments where I generate random vectors and filter them. Here's my implementation that does all of this (defn pfilter [pred coll] (map second (filter first (pmap (fn [item] [(pred item) item]) coll)))) (defn random-n-vector [n] (take n (repeatedly rand))) (defn distance [u v] (Math/sqrt (reduce + (map #(Math/pow (- %1 %2) 2) u v)))) (defn -main [& args] (let [[n-str vectors-str threshold-str] args n (Integer/parseInt n-str) vectors (Integer/parseInt vectors-str) threshold (Double/parseDouble threshold-str) random-vector (partial random-n-vector n) u (random-vector)] (time (println n vectors (count (pfilter (fn [v] (< (distance u v) threshold)) (take vectors (repeatedly random-vector)))))))) The code executes and returns what I expect, that is the parameter n (length of vectors), vectors (the number of vectors) and the number of vectors that are closer than a threshold to the target vector. What I don't understand is why the programs hangs for an additional minute before terminating. Here is the output of a run which demonstrates the error $ time lein run 10 100000 1.0 [null] 10 100000 12283 [null] "Elapsed time: 3300.856 msecs" real 1m6.336s user 0m7.204s sys 0m1.495s Any comments on how to filter in parallel in general are also more than welcome, as I haven't yet confirmed that pfilter actually works.

    Read the article

  • JBoss EAP 5 and Tomcat 6 on the same Windows environment (JBoss serves Tomcat's home page when it's up)

    - by Eddie
    Hi guys, I'm a newbie to Java technologies, just trying to get an idea of them. I was trying to build a Java environment to play with Eclipse, MySQL, Tomcat and JBoss and integrate them together. I did the following: 1. Installed jdk1.6.0_20 (including the JAVA_HOME and path variables; I work on Windows Vista), MySQL 5 and eclipse-jee-galileo (the latest one, 3.6 I believe), and all went well - Java programs compile, run and get a DB connection. 2. Installed JBoss via enterprise-installer-5.0.1.jar on localhost:8080, and this also went well - run.bat started it and I could log in as admin through its home page. I integrated it with Eclipse and could start and stop it from there too. 3. I got apache-tomcat-6.0.26-windows-x86, and this also starts and stops from the command line and from Eclipse. But it uses localhost:8080 without asking. Now the problem is that when I start JBoss I get the Tomcat home page, and I can't fix it. Is it probably because both now use localhost:8080? By the way, does JBoss EAP 5 contain Tomcat inside, so that I shouldn't add that separate Tomcat at all? Thanks for your help in advance, Eddie

    Read the article

  • In Perl, how can a subroutine get a coderef that points to itself?

    - by hillu
    For learning purposes, I am toying around with the idea of building event-driven programs in Perl and noticed that it might be nice if a subroutine that was registered as an event handler could, on failure, just schedule another call to itself for a later time. So far, I have come up with something like this: my $cb; my $try = 3; $cb = sub { my $rc = do_stuff(); if (!$rc && --$try) { schedule_event($cb, 10); # schedule $cb to be called in 10 seconds } else { do_other_stuff; } }; schedule_event($cb, 0); # schedule initial call to $cb to be performed ASAP Is there a way that code inside the sub can access the coderef to that sub so I could do without using an extra variable? I'd like to schedule the initial call like this. schedule_event( sub { ... }, 0); I first thought of using caller(0)[3], but this only gives me a function name, (__ANON__ if there's no name), not a code reference that has a pad attached to it.

    Read the article
