Search Results

Search found 41561 results on 1663 pages for 'linux command'.

  • Cannot turn off autocommit in a script using the Django ORM

    - by Wes
    I have a command-line script that uses the Django ORM with a MySQL backend. I want to turn off autocommit and commit manually, but for the life of me I cannot get this to work. Here is a pared-down version of the script. A row is inserted into testtable every time I run it, and I get this warning from MySQL: "Some non-transactional changed tables couldn't be rolled back".

        #!/usr/bin/python
        import os
        import sys

        django_dir = os.path.abspath(os.path.normpath(os.path.join(os.path.dirname(__file__), '..')))
        sys.path.append(django_dir)
        os.environ['DJANGO_DIR'] = django_dir
        os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

        from django.core.management import setup_environ
        from myproject import settings
        setup_environ(settings)

        from django.db import transaction, connection

        cursor = connection.cursor()
        cursor.execute('SET autocommit = 0')
        cursor.execute('insert into testtable values (\'X\')')
        cursor.execute('rollback')

    I also tried placing the insert in a function and adding Django's commit_manually wrapper, like so:

        @transaction.commit_manually
        def myfunction():
            cursor = connection.cursor()
            cursor.execute('SET autocommit = 0')
            cursor.execute('insert into westest values (\'X\')')
            cursor.execute('rollback')

        myfunction()

    I also tried setting DISABLE_TRANSACTION_MANAGEMENT = True in settings.py, with no further luck. I feel like I am missing something obvious. Any help you can give me is greatly appreciated. Thanks!
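
    A minimal sketch of how the rollback is usually made to work, assuming the table lives on a transactional engine such as InnoDB (the "non-transactional changed tables" warning typically means the table is MyISAM, which cannot be rolled back no matter what autocommit is set to). It sticks to the pre-1.4 setup_environ-style setup the question already uses and routes the rollback through Django's transaction module rather than a raw SQL statement; the function name is hypothetical:

        from django.db import connection, transaction

        @transaction.commit_manually
        def insert_and_discard():
            # Assumes testtable is an InnoDB (transactional) table.
            cursor = connection.cursor()
            cursor.execute("insert into testtable values ('X')")
            # Undo through Django's own bookkeeping instead of cursor.execute('rollback').
            transaction.rollback()

        insert_and_discard()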

    Read the article

  • Does the combo of PHP5, MySQL, and a MacBook Pro constitute a LAMP stack? If not, what does?

    - by RedEye
    Hello - I mostly code in Visual Studio; I like it, but lately it's making me feel a little claustrophobic. On my MacBook Pro I've set up PHP5 and MySQL (natively). With the built-in server on the Mac, does this constitute a LAMP stack? Is Mac OS X considered a Linux environment? I have VMware Fusion 3 - should I set up a Linux OS virtually in order to implement a LAMP stack? Should I just use CakePHP or Zend? Any guidance would be greatly appreciated.

    Read the article

  • Writing data over RxTx using usbserial?

    - by Jeach
    I'm using the RxTx library over usbserial on a Linux distro. The RxTx lib seems to behave quite differently (in a bad way) from how it works over a plain serial port. One of my biggest problems is that the RxTx SerialPortEvent.OUTPUT_BUFFER_EMPTY event does not fire on Linux over usbserial. How do I know when I should write to the stream? Are there any indicators I might have missed? So far my experience with writing and reading concurrently has not been great. Does anyone know whether I should lock the DATA_AVAILABLE handler from being invoked while I'm writing to the stream, or does RxTx accept concurrent reads/writes? Thanks in advance

    Read the article

  • CQRS - The query side

    - by mattcodes
    A lot of the blogosphere articles related to CQRS (command query responsibility segregation) seem to imply that all screens/viewmodels are flat, e.g. Name, Age, Location of Birth etc., and thus the suggestion that, implementation-wise, we stick them into a fast read source (single table per view, MySQL etc.) and pull them out with something like a primitive SqlDataReader, kicking out that nasty NHibernate ORM. However, whilst I agree that domain models don't map well to most screens, many of the screens that I work with are more dimensional, and I'm sure this is pretty common in LOB apps. So my question is: how are people handling screens where, for example, a summary of customer details is displayed alongside a list of their orders with a [more detail] link etc.? I thought about keeping the straightforward SQL query to the query database but breaking off the outer join so I can build a suitable ViewModel for the View, but it seems like overkill. Alternatively (this is starting to feel yuck), the CustomerSummaryView table could have a text/big (whatever the type is in your DB) column called Orders, with the columns for the order summary grid separated by ',' and rows by '|'. Even with an XML datatype it still feels dirty. Any thoughts on an optimal practice?
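
    As an illustration only (not part of the question): a common way to handle such screens is to keep two flat read-model projections and issue two cheap queries per screen, assembling the view model in code instead of packing orders into a delimited column. A minimal sketch, with hypothetical customer_summary_view and order_summary_view tables and a MySQLdb-style (%s paramstyle) DB-API connection to the query store:

        def customer_screen(conn, customer_id):
            # Query 1: the flat customer summary projection.
            cur = conn.cursor()
            cur.execute("select name, age, location_of_birth "
                        "from customer_summary_view where customer_id = %s",
                        [customer_id])
            name, age, birthplace = cur.fetchone()

            # Query 2: the flat order-summary projection for the grid.
            cur.execute("select order_id, order_date, total "
                        "from order_summary_view where customer_id = %s "
                        "order by order_date desc",
                        [customer_id])
            orders = [{'order_id': oid, 'date': d, 'total': t,
                       'detail_link': '/orders/%s' % oid}
                      for oid, d, t in cur.fetchall()]

            return {'name': name, 'age': age,
                    'birthplace': birthplace, 'orders': orders}

    Two small round trips per screen is usually cheaper than either the outer join or the delimited-text column, and it keeps each projection flat.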

    Read the article

  • Get JVM to grow memory demand as needed up to size of VM limit?

    - by Ira Baxter
    We ship a Java application whose memory demand can vary quite a lot depending on the size of the data it is processing. If you don't set the max VM (virtual memory) size, quite often the JVM quits with a GC failure on big data. What we'd like to see is the JVM requesting more memory as GC fails to provide enough, until the total available VM is exhausted; e.g., start with 128 MB and increase geometrically (or in some other step) whenever GC fails. The JVM ("java") command line allows explicit setting of max VM sizes (the various -Xm* options), and you'd think that would be designed to be adequate. We try to do this in a .cmd file that we ship with the application, but if you pick any specific number you get one of two bad behaviors: 1) if your number is small enough to work on most target systems (e.g., 1 GB), it isn't big enough for big data, or 2) if you make it very large, the JVM refuses to run on those systems whose actual VM is smaller than specified. How does one set up Java to use the available VM when needed, without knowing that number in advance, and without grabbing it all on startup?

    Read the article

  • Mercurial hg clone error - "abort: error: Name or service not known"

    - by Bojan Milankovic
    I have installed the latest hg package available for Fedora Linux. However, hg clone reports an error:

        $ hg clone http://localmachine001:8000/ repository
        abort: error: Name or service not known

    localmachine001 is a computer on the local network. I can ping it from my Linux box without any problems, and I can browse the existing code at the same HTTP address. However, hg clone does not work. If I execute the same command from my Macintosh machine, I can clone the repository without trouble. Some Internet resources recommend editing the .hgrc file and adding a proxy to it:

        [http_proxy]
        host=proxy:8080

    I have tried that without any success. I also assume that a proxy is not needed in this case, since the hg server machine is on my local network. Can anyone recommend what I should do, or how I could track down the problem? Thank you in advance.
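
    A small diagnostic sketch (an assumption, not from the question): since Mercurial is written in Python, reproducing the name lookup with Python's own resolver helps separate a resolver problem from a Mercurial problem:

        import socket

        try:
            # The same lookup hg has to perform before it can open the connection.
            print(socket.getaddrinfo('localmachine001', 8000))
        except socket.gaierror as err:
            print('lookup failed: %s' % err)

    If this lookup succeeds, the next usual suspect is an http_proxy environment variable or [http_proxy] section that makes hg try to reach a proxy host it cannot resolve.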

    Read the article

  • Can't get Logback Eclipse plugin to display output

    - by Zombies
    I followed the instructions here: http://logback.qos.ch/consolePlugin.html. My logback.xml is found and set up correctly, and the port is listening, but nothing shows up for logger.error("Test");. Logging goes to sysout fine when I remove logback.xml, which shows me that logback itself is working. I installed the plugin on Linux by moving it to /usr/lib/eclipse/plugins. The window shows up, but no logging events appear. I also added a catch-all ACCEPT filter like the one on that page. Perhaps this is a Linux permission issue?

    Read the article

  • I need help with this Ogre-dependent header (Qgears)

    - by commodore
    I'm two errors away from compiling Qgears (a hacked version of the Final Fantasy VII engine). I've messed with the preprocessor paths to point at the actual location of the Ogre header files. Here are the errors:

        ||=== qgears, Debug ===|
        /home/cj/Desktop/qgears/trunk/project/linux/src/core/TextManager.h|48|error: invalid use of ‘::’|
        /home/cj/Desktop/qgears/trunk/project/linux/src/core/TextManager.h|48|error: expected ‘;’ before ‘m_LanguageRoot’|
        ||=== Build finished: 2 errors, 0 warnings ===|

    Here's the header file:

        // $Id$
        #ifndef TEXT_MANAGER_h
        #define TEXT_MANAGER_h

        #include <OGRE/OgreString.h>
        #include <OGRE/OgreUTFString.h>
        #include <map>

        struct TextData
        {
            TextData(): text(""), width(0), height(0)
            {
            }

            Ogre::String    name;
            Ogre::UTFString text;
            int             width;
            int             height;
        };

        typedef std::vector<TextData> TextDataVector;

        class TextManager
        {
        public:
            TextManager(void);
            virtual ~TextManager(void);

            void SetLanguageRoot(const Ogre::String& root);
            void LoadTexts(const Ogre::String& file_name);
            void UnloadTexts(const Ogre::String& file_name);
            const TextData GetText(const Ogre::String& name);

        private:
            struct TextBlock
            {
                Ogre::String block_name;
                std::vector<TextData> text;
            }

            Ogre::String m_LanguageRoot; // Line #48
            std::list<TextBlock> m_Texts;
        };

        extern TextManager* g_TextManager;

        #endif // TEXT_MANAGER_h

    The only header file included that's not an Ogre header is <map>. If it helps, I'm using the Code::Blocks IDE with the GCC compiler on GNU/Linux (Arch). Even if I get this header fixed I suspect I'll hit more build errors later, but it's worth a shot.

    Edit: I added the semicolon and now have one more error in the header file: error: expected unqualified-id before ‘{’ token

    Read the article

  • undefined reference linker error

    - by klaus-johan
    Hi, I've got stuck in a C++ project under Linux, for which I get an undefined reference error when I try to create an object of a class that I just wrote. I believe this is a linker error, caused by the fact that somewhere, somehow, I should tell the linker to take the new class into account. I looked at the project properties, and the run command executes a script (cmake.sh). Because the project wasn't created by me, and because I'm a novice at working under Linux, I just don't know how to direct the linker to do what I expect it to do!

    Read the article

  • write error: Broken pipe

    - by Fahim
    Hi, I have to run a tool on around 300 directories. Each run takes around 1 minute to 30 minutes, or even more than that. So I wrote a Python script with a loop to run the tool on all directories, one after another. My Python script has code something like:

        for directory in directories:
            os.popen('runtool_exec ' + directory)

    But when I run the Python script I get the following error messages repeatedly:

        ..
        tail: write error: Broken pipe
        date: write error: Broken pipe
        ..

    All I do is log in to a remote server using ssh, where the tool, the Python script, and the subject directories are kept. When I run the tool individually from the command prompt with a command like:

        runtool_exec directory

    it works fine. The "broken pipe" error only appears when I run it through the Python script. Any idea or workaround? Please suggest. Thanks. Fahim
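
    A minimal sketch of one likely fix (an assumption, not part of the question): os.popen opens a pipe for the tool's output that the loop never reads, so once that pipe fills up or is closed, the tool's child commands (tail, date, ...) get a broken pipe. Reading the output, or redirecting it to a file via subprocess, avoids that; the log file name is a placeholder:

        import subprocess

        directories = ['dir1', 'dir2']  # placeholder list of the 300 directories

        with open('runtool.log', 'a') as log:
            for directory in directories:
                # Wait for each run and send all output to the log instead of
                # an unread pipe, so the tool's subcommands never hit EPIPE.
                subprocess.call(['runtool_exec', directory],
                                stdout=log, stderr=subprocess.STDOUT)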

    Read the article

  • Why do GPRS modems provide an embedded TCP/IP stack?

    - by Christian Madsen
    My colleague and I are mining the GPRS modem market for a module suitable for use with embedded Linux. During the market scan, we see that several vendors highlight that their modems include an embedded TCP/IP stack. This makes me wonder: when we are using embedded Linux, which already contains a TCP/IP stack and connects using PPP, will it make use of the stack included in the GPRS modem at all? My current assumption is that the stack is included for use with tiny microcontroller OSes that do not supply their own stack. Also, some of the modems allow small applications to run in the modem's baseband processor, which could explain the embedded stack. So: is the TCP/IP stack supplied by the GPRS modem superfluous when using it with a high-level OS, or did I overlook something?

    Read the article

  • Automate the signature of the update.rdf manifest for my firefox extension

    - by streetpc
    Hello, I'm developing a Firefox extension and I'd like to provide automatic updates to my beta testers (who are not tech-savvy). Unfortunately, the update server doesn't provide HTTPS. According to the Extension Developer Guide on signing updates, I have to sign my update.rdf and provide an encoded public key in the install.rdf. There is the McCoy tool to do all of this, but it is an interactive GUI tool, and I'd like to automate the extension packaging using an Ant script (as this is part of a much bigger process). I can't find a more precise description of what happens when signing the update.rdf manifest than the text below, and the McCoy source is an awful lot of JavaScript. The doc says:

        The add-on author creates a public/private RSA cryptographic key pair. The public part of the key is DER encoded and then base 64 encoded and added to the add-on's install.rdf as an updateKey entry. (...) Roughly speaking the update information is converted to a string, then hashed using a sha512 hashing algorithm and this hash is signed using the private key. The resultant data is DER encoded then base 64 encoded for inclusion in the update.rdf as an signature entry.

    I don't know much about DER encoding, but it seems like it needs some parameters. So would anyone know either the full algorithm to sign the update.rdf and install.rdf using a predefined keypair, or a scriptable alternative to McCoy, whether a command-line tool like asn1coding would suffice, or a good/simple developer tutorial on DER encoding?

    Read the article

  • Regex to find external links from the html file using grep

    - by Amar
    Hello, for the past few days I've been trying to develop a regex that fetches all the external links from the web pages given to it, using grep. Here is my grep command:

        grep -h -o -e "\(\(mailto:\|\(\(ht\|f\)tp\(s\?\)\)\)\://\)\{1\}\(.*\?\)" "/mnt/websites_folder/folder_to_search" -r

    The grep seems to return everything after the external links on the given line. For example, if an HTML file contains something like this on the same line:

        <a href="http://www.google.com">Google</a><p><a href='https://yahoo.com'>Yahoo</a></p>

    then the given grep command returns the following result:

        http://www.google.com">Google</a><p><a href='https://yahoo.com'>Yahoo</a></p>

    The idea is that if an HTML file contains more than one link (irrespective of whether it's in a, img etc.) on the same line, the regex should fetch only the links and not all the remaining content of that line. I managed to develop the same thing on rubular.com; the regex is as follows:

        ("|')(\b((ht|f)tps?:\/\/)(.*?)\b)("|')

    which works with the above input, but I'm not able to replicate it in grep. Can anyone help? I can't modify the HTML files, so don't ask me to do that; nor can I look at each specific tag and check its attributes to get external links, as that adds processing time and my application doesn't demand it. Thank you
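
    As a side illustration (not part of the question): grep's basic and extended regex flavours have no non-greedy quantifier, so the trailing \(.*\?\) cannot mean "lazy" there; the usual workaround is a negated character class that stops at the closing quote (or, where GNU grep was built with PCRE support, grep -P, which does understand .*?). A small sketch of the negated-class idea using Python's re, with a hypothetical sample line:

        import re

        # Hypothetical sample line with two links in different quote styles.
        line = ('<a href="http://www.google.com">Google</a>'
                "<p><a href='https://yahoo.com'>Yahoo</a></p>")

        # Stop at the first quote instead of relying on a lazy ".*?",
        # mirroring what a negated character class would do in grep -E.
        pattern = r"""(?:mailto:|(?:ht|f)tps?://)[^"']+"""
        print(re.findall(pattern, line))
        # ['http://www.google.com', 'https://yahoo.com']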

    Read the article

  • What happens after a packet is captured?

    - by Rayne
    Hi all, I've been reading about what happens after packets are captured by NICs, and the more I read, the more I'm confused. Firstly, I've read that traditionally, after a packet is captured by the NIC, it gets copied to a block of memory in kernel space, and then to user space for whatever application then works on the packet data. Then I read about DMA, where the NIC copies the packet directly into memory, bypassing the CPU. So is the NIC -> kernel memory -> user-space memory flow still valid? Also, do most NICs (e.g. Myricom) use DMA to improve packet capture rates? Secondly, does RSS (Receive Side Scaling) work similarly on both Windows and Linux systems? I can only find detailed explanations of how RSS works in MSDN articles, which talk about how RSS (and MSI-X) works on Windows Server 2008, but the same concepts of RSS and MSI-X should still apply to Linux systems, right? Thank you. Regards, Rayne

    Read the article

  • How can I include platform-specific native libraries in the .JAR file using Eclipse?

    - by Martin Wiboe
    Hello all, I am just starting to learn JNI. I have been following a simple example and have created a Java app that calls a Hello World method in a native library. I'd like to target Win32 and Linux x86. My library resides in a DLL, and I can call it just fine using LoadLibrary when the DLL is added to the root of my Eclipse project. However, I can't figure out how to get Eclipse to export a runnable JAR that includes the DLL and the .so file for Linux. So my question is basically: how would you go about creating a project in Eclipse that includes several versions of the same native library? Thank you, Martin

    Read the article

  • How to find connected hosts on a network (VPN or LAN)

    - by Javier Novoa C.
    Hello, I'm looking for possible solutions to the following need: I have a VPN configured (using OpenVPN on Linux, BTW), and I want to know at any moment which hosts are connected to it. I realize this is probably the same thing as trying to know which hosts are connected to a LAN, so any of those solutions might do the job. The thing is, I once used a Hamachi VPN on Linux, and with it I could see which hosts were connected to a particular network I belonged to, so I was wondering if something similar might be possible with OpenVPN (or any VPN and/or any LAN). Preferably I'm looking for open-source/free software solutions, or hints on how to program it myself (in the simplest way possible; not that I don't know how to program, but I'm trying to achieve this in a simple manner). But anyway, if there are no open-source/free solutions, any other will do. Thanks a lot! Javier, Mexico City
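
    A small sketch of one common approach (assumptions: the OpenVPN server config contains a line such as "status /var/log/openvpn-status.log", and that file uses the usual "Common Name,Real Address,..." client-list layout): parse the status file the server maintains and list the currently connected clients.

        # Hypothetical path; use whatever the "status" directive points at in server.conf.
        STATUS_FILE = '/var/log/openvpn-status.log'

        def connected_clients(path=STATUS_FILE):
            clients = []
            in_client_list = False
            with open(path) as status:
                for line in status:
                    line = line.strip()
                    if line.startswith('Common Name,'):
                        in_client_list = True   # header row of the client table
                        continue
                    if line.startswith('ROUTING TABLE'):
                        break                   # end of the client table
                    if in_client_list and line:
                        name, real_address = line.split(',')[:2]
                        clients.append((name, real_address))
            return clients

        for name, address in connected_clients():
            print('%s connected from %s' % (name, address))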

    Read the article

  • Generating a reasonable ctags database for Boost

    - by Robert S. Barnes
    I'm running Ubuntu 8.04 and I ran the command:

        $ ctags -R --c++-kinds=+p --fields=+iaS --extra=+q -f ~/.vim/tags/stdlibcpp /usr/include/c++/4.2.4/

    to generate a ctags database for the standard C++ library and STL (libstdc++) on my system, for use with the OmniCppComplete vim script. This gave me a very reasonable 4 MB tags file which seems to work fairly well. However, when I ran the same command against the installed Boost headers:

        $ ctags -R --c++-kinds=+p --fields=+iaS --extra=+q -f ~/.vim/tags/boost /usr/include/boost/

    I ended up with a 1.4 GB tags file! I haven't tried it yet, but that seems like it's going to be too large to be useful. Is there a way to get a slimmer, more usable tags file for my installed Boost headers?

    Edit: Just as a note, libstdc++ includes TR1, which has a lot of Boost libs in it. So there must be something weird going on for libstdc++ to come out with a 4 MB tags file and Boost to end up with a 1.4 GB tags file.

    Read the article

  • what is the difference between "./somescript.sh" and ". ./somescript.sh"

    - by Peter
    This question may sound silly to you. Today I was following some instructions to install a piece of software on Linux. A script needed to be run first; it sets some environment variables. The instructions told me to execute . ./setup.sh, but I made the mistake of executing ./setup.sh, so the environment was not set. Finally I noticed this and proceeded. I want to know: what exactly is the difference between the two? I am completely new to Linux, so please be as elaborate as possible.

    Read the article

  • Understanding the output of ldd

    - by nebukadnezzar
    I'm having a hard time understanding the output of ldd, especially the processor identifiers. The string in question is this one:

        Shortest.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, from ']', not stripped

    I have several questions about it. What does "ELF" mean? I know that's what Linux binaries are called (Windows binaries are PE, "Portable Executable", binaries), but isn't ELF an abbreviation for something? What does LSB mean? I can't even guess. I also see the string "Intel" there, and now I seriously wonder about the portability of Linux binaries, as ldd seems to expect every binary to be compiled for an Intel processor. What if it wasn't compiled on an Intel processor, or I attempt to run the binary on a computer that doesn't run on top of an Intel processor? And why the ']'? My guess is that it should be some sort of linker identifier, but ']' doesn't look much like an identifier. Thanks in advance

    Read the article

  • How to find which type of system call is used by a program

    - by bala1486
    I am working on an x86_64 machine, and my Linux kernel is also a 64-bit kernel. As there are different ways to implement a system call (int 0x80, syscall, sysenter), I wanted to know what type of system call my machine is using. I am a newbie to Linux. I have written a demo program:

        #include <unistd.h>

        int main()
        {
            getpid();
            return 0;
        }

    getpid() makes one system call. Can anybody give me a method to find which type of system call will be used by my machine for this program? Thank you....

    Read the article

  • Error in unzipping a file from a Python script running as a daemon

    - by Fedrick
    I am getting an error whenever I try to run the following unzip command from a Python script which is running as a daemon.

    Command:

        unzip abcd.zip > /dev/null

    Error:

        End-of-central-directory signature not found.  Either this file is not
        a zip file, or it constitutes one disk of a multi-part archive.  In the
        latter case the central directory and zipfile comment will be found on
        the last disk(s) of this archive.
        unzip:  cannot find zipfile directory in one of abcd.zip or
                abcd.zip.zip, and cannot find abcd.zip.ZIP, period.

    Could anyone help me in this regard? Thanks in advance.
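
    A minimal sketch of how this is often tracked down (an assumption, not from the question): a daemon usually changes its working directory to /, so a relative "abcd.zip" no longer points at the real archive and unzip ends up reading something that is not a zip file. Using absolute paths and checking the exit status makes that visible; the paths below are placeholders:

        import subprocess

        archive = '/var/data/incoming/abcd.zip'   # hypothetical absolute path to the zip
        dest = '/var/data/extracted'              # hypothetical extraction directory

        # -o: overwrite existing files without prompting, -d: extract into 'dest'
        status = subprocess.call(['unzip', '-o', archive, '-d', dest])
        if status != 0:
            raise RuntimeError('unzip %s exited with status %d' % (archive, status))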

    Read the article
