Search Results

Search found 841 results on 34 pages for 'distribute'.

Page 26/34

  • 'pip install carbon' looks like it works, but pip disagrees afterward

    - by fennec
    I'm trying to use pip to install the package carbon, a package related to statistics collection. When I run pip install carbon, it looks like everything works. However, pip is unconvinced that the package is actually installed. (This ultimately causes trouble because I'm using Puppet, and have a rule to install carbon using pip, and when puppet asks pip "is this package installed?" it says "no" and it reinstalls it again.) How do I figure out what's preventing pip from recognizing the success of this installation? Here is the output of the regular install: root@statsd:/opt/graphite# pip install carbon Downloading/unpacking carbon Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 Successfully installed carbon Cleaning up... 
root@statsd:/opt/graphite# pip freeze | grep carbon root@statsd: Here is the verbose version of the install: root@statsd:/opt/graphite# pip install carbon -v Downloading/unpacking carbon Using version 0.9.9 (newest of versions: 0.9.9, 0.9.9, 0.9.8, 0.9.7, 0.9.6, 0.9.5) Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon running egg_info creating pip-egg-info/carbon.egg-info writing requirements to pip-egg-info/carbon.egg-info/requires.txt writing pip-egg-info/carbon.egg-info/PKG-INFO writing top-level names to pip-egg-info/carbon.egg-info/top_level.txt writing dependency_links to pip-egg-info/carbon.egg-info/dependency_links.txt writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) reading manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon running install running build running build_py creating build creating build/lib.linux-i686-2.7 creating build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_publisher.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/manhole.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/instrumentation.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/cache.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/management.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/relayrules.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/events.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/protocols.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/conf.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/rewrite.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/hashing.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/writer.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/client.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/util.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/service.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_listener.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/routers.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/storage.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/log.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/__init__.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/state.py -> build/lib.linux-i686-2.7/carbon creating build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/receiver.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/rules.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/buffers.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/__init__.py -> build/lib.linux-i686-2.7/carbon/aggregator package init file 
'lib/twisted/plugins/__init__.py' not found (or not a regular file) creating build/lib.linux-i686-2.7/twisted creating build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_relay_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_aggregator_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_cache_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/carbon/amqp0-8.xml -> build/lib.linux-i686-2.7/carbon running build_scripts creating build/scripts-2.7 copying and adjusting bin/validate-storage-schemas.py -> build/scripts-2.7 copying and adjusting bin/carbon-aggregator.py -> build/scripts-2.7 copying and adjusting bin/carbon-cache.py -> build/scripts-2.7 copying and adjusting bin/carbon-relay.py -> build/scripts-2.7 copying and adjusting bin/carbon-client.py -> build/scripts-2.7 changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 running install_lib copying build/lib.linux-i686-2.7/carbon/amqp_publisher.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/manhole.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp0-8.xml -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/instrumentation.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/cache.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/management.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/relayrules.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/events.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/protocols.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/conf.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/rewrite.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/hashing.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/writer.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/client.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/util.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/aggregator/receiver.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/rules.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/buffers.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/__init__.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/service.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp_listener.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/routers.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/storage.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/log.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/__init__.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/state.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/twisted/plugins/carbon_relay_plugin.py -> /opt/graphite/lib/twisted/plugins copying 
build/lib.linux-i686-2.7/twisted/plugins/carbon_aggregator_plugin.py -> /opt/graphite/lib/twisted/plugins copying build/lib.linux-i686-2.7/twisted/plugins/carbon_cache_plugin.py -> /opt/graphite/lib/twisted/plugins byte-compiling /opt/graphite/lib/carbon/amqp_publisher.py to amqp_publisher.pyc byte-compiling /opt/graphite/lib/carbon/manhole.py to manhole.pyc byte-compiling /opt/graphite/lib/carbon/instrumentation.py to instrumentation.pyc byte-compiling /opt/graphite/lib/carbon/cache.py to cache.pyc byte-compiling /opt/graphite/lib/carbon/management.py to management.pyc byte-compiling /opt/graphite/lib/carbon/relayrules.py to relayrules.pyc byte-compiling /opt/graphite/lib/carbon/events.py to events.pyc byte-compiling /opt/graphite/lib/carbon/protocols.py to protocols.pyc byte-compiling /opt/graphite/lib/carbon/conf.py to conf.pyc byte-compiling /opt/graphite/lib/carbon/rewrite.py to rewrite.pyc byte-compiling /opt/graphite/lib/carbon/hashing.py to hashing.pyc byte-compiling /opt/graphite/lib/carbon/writer.py to writer.pyc byte-compiling /opt/graphite/lib/carbon/client.py to client.pyc byte-compiling /opt/graphite/lib/carbon/util.py to util.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/receiver.py to receiver.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/rules.py to rules.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/buffers.py to buffers.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/service.py to service.pyc byte-compiling /opt/graphite/lib/carbon/amqp_listener.py to amqp_listener.pyc byte-compiling /opt/graphite/lib/carbon/routers.py to routers.pyc byte-compiling /opt/graphite/lib/carbon/storage.py to storage.pyc byte-compiling /opt/graphite/lib/carbon/log.py to log.pyc byte-compiling /opt/graphite/lib/carbon/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/state.py to state.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_relay_plugin.py to carbon_relay_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_aggregator_plugin.py to carbon_aggregator_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_cache_plugin.py to carbon_cache_plugin.pyc running install_data copying conf/storage-schemas.conf.example -> /opt/graphite/conf copying conf/rewrite-rules.conf.example -> /opt/graphite/conf copying conf/relay-rules.conf.example -> /opt/graphite/conf copying conf/carbon.amqp.conf.example -> /opt/graphite/conf copying conf/aggregation-rules.conf.example -> /opt/graphite/conf copying conf/carbon.conf.example -> /opt/graphite/conf running install_egg_info running egg_info creating lib/carbon.egg-info writing requirements to lib/carbon.egg-info/requires.txt writing lib/carbon.egg-info/PKG-INFO writing top-level names to lib/carbon.egg-info/top_level.txt writing dependency_links to lib/carbon.egg-info/dependency_links.txt writing manifest file 'lib/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found reading manifest file 'lib/carbon.egg-info/SOURCES.txt' writing manifest file 'lib/carbon.egg-info/SOURCES.txt' removing '/opt/graphite/lib/carbon-0.9.9-py2.7.egg-info' (and everything under it) Copying lib/carbon.egg-info to /opt/graphite/lib/carbon-0.9.9-py2.7.egg-info running install_scripts copying build/scripts-2.7/validate-storage-schemas.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-aggregator.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-cache.py -> /opt/graphite/bin copying 
build/scripts-2.7/carbon-relay.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-client.py -> /opt/graphite/bin changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 writing list of installed files to '/tmp/pip-9LuJTF-record/install-record.txt' Successfully installed carbon Cleaning up... Removing temporary dir /opt/graphite/build... root@statsd:/opt/graphite# For reference, this is pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)
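
    One way to dig into this: pip freeze only reports distributions whose egg-info sits on sys.path, and the log shows carbon's setup.py installing its egg-info under the /opt/graphite prefix rather than site-packages. A rough Python sketch for comparing the two views (paths are assumptions taken from the log above, not verified):

        # Compare what setuptools/pip can see with where carbon installed itself.
        # Assumes Python 2.x with setuptools or distribute available.
        import sys
        import pkg_resources

        visible = sorted(dist.project_name.lower() for dist in pkg_resources.working_set)
        print('distributions visible to pip freeze:', visible)
        print('carbon visible?', 'carbon' in visible)

        # pip only scans these directories for egg-info:
        for entry in sys.path:
            print('sys.path entry:', entry)

        # If carbon's egg-info landed under a custom prefix such as
        # /opt/graphite/lib (as the install log suggests), it stays invisible
        # to pip unless that directory is put on sys.path, e.g. via
        # PYTHONPATH or a .pth file.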

    Read the article

  • Version control a content management system?

    - by Mike
    I have the following directory structure in the CMS application we have written: /application /modules /cms /filemanager /block /pages /sitemap /youtube /rss /skin /backend /default /css /js /images /frontend /default /css /js /images Application contains code specific to the current CMS implementation, i.e code for this specific cms. Modules contain reusable portions of code that we share across projects, such as libraries to work with youtube or rss feeds. We include these as git submodules, so that we can update the module in any website and push the changes back across all other projects. It makes it really easy to apply a change to our code and distribute it. We wanted to turn the CMS into a module so we get the same benefit - we can run the entire project under source control, then update the cms as required through a git-submodule. We have run into a problem however: the cms requires javascript/images/css in order for it to work correctly. Things we have thought about: We could create 2 submodules, one for cms-skin and one for cms, but this means you cannot "git pull" one version without having some idea of which versions of skin work with which versions of cms. i.e version 1.2.2 CMS might have issues with 1.0.3 CMS-Skin We could add the skin to the cms module but this has the following problems: Skin should be available on the document root, module code shouldn't be, and if it is it should probably be secured via .htaccess It doesn't seem to make any sense bundling assets with php code We could create a symlink between /skin/backend/ to go to /modules/cms/skin but does this cause any security problems, and do we want to require something like a symlink for the application to work? We could create a hook for git or a shell script that copies files from modules/cms/skin to skin/backend when an update occurs, but this means we lose the ability to edit CMS core files in a project then push them back How is this typically done in large scale cms's? How is it possible to get the source code for a cms under version control, work on the application for a client, then update the sourcecode as releases and given by the vendor? How do applications like Magento or Drupal do this?
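
    The hook idea mentioned above takes very little code; the following is only an illustration (paths from the question, everything else assumed), run as a git post-merge or post-checkout hook to mirror the submodule's skin into the document root:

        #!/usr/bin/env python
        # Mirror the skin shipped inside the cms submodule into the public
        # skin directory. Local edits under skin/backend are overwritten,
        # which is exactly the trade-off discussed above.
        import os
        import shutil

        SRC = 'modules/cms/skin'   # skin inside the cms submodule
        DST = 'skin/backend'       # location served from the docroot

        if not os.path.isdir(SRC):
            raise SystemExit('missing %s; run git submodule update first?' % SRC)
        if os.path.isdir(DST):
            shutil.rmtree(DST)
        shutil.copytree(SRC, DST)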

    Read the article

  • Quantis Quantum Random Number Generator (QRNG) - any reviews?

    - by Tim Post
    I am thinking about getting one of these (PCI) to set up an internal entropy pool similar to this service who incidentally brought us fun captcha challenges. Prior to lightening my wallet, I'm hoping to gather feedback from people who may be using this device. As there is no possible 'correct' answer, I am making this CW and tagging it as subjective. I'm undertaking a project to help write Monte Carlo simulations for a non profit that distributes mosquito nets in Malaria stricken areas. The idea is to model areas to determine the best place to distribute mosquito nets. During development, I expect to consume gigs if not more of the RNG output. We really need our own source. Is this device reliable? Does it have to be re-started often? Is its bandwidth really as advertised? It passes all tests, as far as randomness goes (i.e. NIST/DIEHARD). What I don't want is something in deadlock due to some ioctl in disk sleep that does nothing but radiate heat. This is not a spamvertisement, I'm helping out of pocket and I really, really want to know if such a large purchase will bear fruit. I can't afford to build a HRNG based on radioactive decay, this looks like the next best thing. Any comments are appreciated. I will earn zero rep for this, please do not vote to close. This is no different than questions regarding the utilization of some branded GPU for some odd purpose. Answers pointing to other solutions will be gladly accepted, I'm not married to this idea.

    Read the article

  • App freezing after launch without any error on iPhone Simulator?

    - by Meko
    HI.I am using CoreData on my UITable to show some records.It works when I first run.But if I run my app on simulator second time it shows up with data but then it stops.Sometimes it quits from app or it only froze and i cant click on table or tab bar. I looked also on Console but I thought there is no error.Here output from console The Debugger has exited with status 0. [Session started at 2010-03-30 00:22:06 +0300.] 2010-03-30 00:22:08.660 Paparazzi2[3556:207] Creating Photo: Urban disaster 2010-03-30 00:22:08.661 Paparazzi2[3556:207] Person is: Josh 2010-03-30 00:22:08.665 Paparazzi2[3556:207] Person is: Josh 2010-03-30 00:22:08.665 Paparazzi2[3556:207] -- Person Exists : Josh-- 2010-03-30 00:22:08.719 Paparazzi2[3556:207] Creating Photo: Concrete pitch forks 2010-03-30 00:22:08.719 Paparazzi2[3556:207] Person is: Josh 2010-03-30 00:22:08.721 Paparazzi2[3556:207] Person is: Josh 2010-03-30 00:22:08.721 Paparazzi2[3556:207] -- Person Exists : Josh-- 2010-03-30 00:22:08.727 Paparazzi2[3556:207] Creating Photo: Leaves on fire 2010-03-30 00:22:08.728 Paparazzi2[3556:207] Person is: Al 2010-03-30 00:22:08.734 Paparazzi2[3556:207] Person is: Al 2010-03-30 00:22:08.734 Paparazzi2[3556:207] -- Person Exists : Al-- [Session started at 2010-03-30 00:22:08 +0300.] GNU gdb 6.3.50-20050815 (Apple version gdb-967) (Tue Jul 14 02:11:58 UTC 2009) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i386-apple-darwin".sharedlibrary apply-load-rules all Attaching to process 3556. (gdb)

    Read the article

  • Display HTML page in Office 2003 or 2007 task pane via VBA

    - by Malcolm
    Is it possible to display an HTML page in an Office 2003 and/or 2007 task pane via VBA? Background: We have a complicated configuration file that our users maintain in Word (using a real editor is not an option for our audience). We would like to create several toolbar buttons that display a basic HTML page in a task pane as a form of online help for our users. The reason we want to use a task pane to display help (vs. an external browser or traditional help engine) is so that the help content is "embedded" in Word vs. displayed via a seperate application. The problem with using a regular browser or help engine to display help is that users have to manually size and position both applications so that they can see them simultaneously and its very easy to "lose" one application when togging between many applications. We don't want to go down the route of writing a VisualStudio based task pane component - we want to keep things simple (KISS) and encapsulate everything in an easy to distribute Word template file (.dot or dotx.). Suggestions?

    Read the article

  • file layout and setuptools configuration for the python bit of a multi-language library

    - by dan mackinlay
    So we're writing a full-text search framework for MongoDB. MongoDB is pretty much javascript-native, so we wrote the javascript library first, and it works. Now I'm trying to write a python framework for it, which will be partially in python, but partially use those same stored javascript functions - the javascript functions are an intrinsic part of the library. On the other hand, the javascript framework does not depend on python. Since they are pretty intertwined, it seems worthwhile keeping them in the same repository. I'm trying to work out a way of structuring the whole project to give the javascript and python frameworks equal status (maybe a ruby driver or whatever in the future?), but still allow the python library to install nicely. Currently it looks like this (simplified a little):
      javascript/jstest/test1.js
      javascript/mongo-fulltext/search.js
      javascript/mongo-fulltext/util.js
      python/docs/indext.rst
      python/tests/search_test.py
      python/tests/__init__.py
      python/mongofulltextsearch/__init__.py
      python/mongofulltextsearch/mongo_search.py
      python/mongofulltextsearch/util.py
      python/setup.py
    I've skipped a few files for simplicity, but you get the general idea; it's a pretty much standard python project... except that it depends critically on a whole bunch of javascript which is stored in a sibling directory tree. What's the preferred setup for dealing with this kind of thing when it comes to setuptools? I can work out how to use package_data etc. to install data files that live inside my python project, as per the setuptools docs. The problem is if I want to use setuptools to install stuff, including the javascript files from outside the python code tree, and then also access them in a consistent way both when I'm developing the python code and when it is easy_installed to someone's site. Is that supported behaviour for setuptools? Should I be using paver or distutils2 or Distribute or something? (Basic distutils is not an option; the whole reason I'm doing this is to enable requirements tracking.) How should I be reading the contents of those files into python scripts?
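
    For what it's worth, one common pattern (an assumption here, not something prescribed by the setuptools docs for this exact layout) is to have a build step copy or symlink the javascript tree into the python package and then treat it as ordinary package data, so the same pkg_resources call works in a development checkout and after easy_install:

        # setup.py sketch: assumes a build step has placed the javascript
        # under python/mongofulltextsearch/js/ (directory name is illustrative).
        from setuptools import setup, find_packages

        setup(
            name='mongofulltextsearch',
            version='0.1',
            packages=find_packages(),
            package_data={'mongofulltextsearch': ['js/*.js']},
            zip_safe=False,
        )

    Library code can then load the stored javascript the same way in a checkout and in an installed egg, e.g. pkg_resources.resource_string('mongofulltextsearch', 'js/search.js').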

    Read the article

  • XML Schema Migration

    - by Corwin Joy
    I am working on a project where we need to save data in an XML format. The problem is, over time we expect the format / schema for our data to change. What we want to be able to do is to produce scripts to migrate our data across different schema versions. We distribute our product to thousands of customers so we need to be able to run / apply these scripts at customer sites (so we can't just do the conversions by hand). I think that what we are looking for is some kind of XML data migration tool. In my mind the ideal tool could: Do an "XML diff" of two schema to identify added/deleted/changed nodes. Allow us to specify transformation functions. So, for example, we might add a new element to our schema that is a function of the old elements. (E.g. a new element C where C = A+B, A + B are old elements). So I think I am looking for a kind of XML diff and patch tool which can also apply transformation functions. One tool I am looking at for this is Altova's MapForce . I'm sure others here have had to deal with XML data format migration. How did you handle it? Edit: One point of clarification. The "diff" I plan to do is on the schema or .xsd files. The actual changes will be made to particular data sets that follow a given schema. These data sets will be .xml files. So its a "diff" of the schema to help figure out what changes need to be made to data sets to migrate them from one scheme to another.
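
    To make the "transformation function" idea concrete, a migration step over one data set can be plain code; the sketch below uses the hypothetical A, B, C elements from the question and assumes a simple record-based layout:

        # Upgrade one data file from schema v1 to v2 by adding C = A + B.
        # Element and attribute names are illustrative assumptions.
        import xml.etree.ElementTree as ET

        def migrate_v1_to_v2(path):
            tree = ET.parse(path)
            root = tree.getroot()
            for record in root.findall('record'):      # assumed container element
                a = float(record.findtext('A', default='0'))
                b = float(record.findtext('B', default='0'))
                ET.SubElement(record, 'C').text = str(a + b)
            root.set('schemaVersion', '2')             # stamp the new version
            tree.write(path)

    A customer-site upgrade script would chain such steps until the file's recorded version matches what the product expects.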

    Read the article

  • Python for a hobbyist programmer (a few questions)

    - by Matt
    I'm a hobbyist programmer (only in TI-Basic before now), and after much, much, much debating with myself, I've decided to learn Python. I don't have a ton of free time to teach myself a hundred languages, and all the programming I do will be for personal use or for distributing to people who need it, so I decided that I needed one good, strong language to be good at. My questions: Is Python powerful enough to handle most things that a typical programmer might do in his off-time? I have in mind things like complex stat generators based on user input for tabletop games, making small games, automating install processes, and building interactive websites, but probably a hundred things along those lines. Does Python handle networking tasks fairly well? Can Python source be obfuscated, or is it open-source by nature? The reason I ask is that if I make something cool and distribute it, I don't want some idiot script kiddie to edit his own name in and say he wrote it. And how popular is Python compared to other languages? Ideally, my language would be good and useful, with help found online without extreme difficulty, but not so common that every idiot with a computer knows it. I like the idea of knowing a slightly obscure language. Thanks a ton for any help you can provide.

    Read the article

  • iPhone app with SSL client certs

    - by Pavel Georgiev
    I'm building an iphone app that needs to access a web service over https using client certificates. If I put the client cert (in pkcs12 format) in the app bundle, I'm able to load it into the app and make the https call (largely thanks to stackoverflow.com). However, I need a way to distribute the app without any certs and leave it to the user to provide his own certificate. I thought I would just do that by instructing the user to import the certificate in iphone's profiles (settings-general-profiles), which is what you get by opening a .p12 file in Mail.app and then I would access that item in my app. I would expect that the certificates in profiles are available through the keychain API, but I guess I'm wrong on that. 1) Is there a way to access a certificate that I've already loaded in iphone's profile in my app? 2) What other options I have for loading a user specified certificate in my app? The only thing I can come up with is providing some interface where the user can give an URL to his .p12 cerificate, which I can then load into the app's keychain for later use, but thats not exactly user-friednly. I'm looking for something that would allow the user to put the cert on phone (email it to himself) and then load it in my app.

    Read the article

  • EXC_BAD_ACCESS in NSURLConnection

    - by Lars
    Hi all, i got an EXC_BAD_ACCESS when i perform the last line of the function (webData). -(void)requestSoap{ NSString *requestUrl = @"http://www.website.com/webservice.php"; NSString *soapMessage = @"the soap message"; //website and soapmessage are valid in original code. NSError **error; NSURLResponse *response; //Convert parameter string to url NSURL *url = [NSURL URLWithString:requestUrl]; NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:10]; NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]]; //Create an XML message for webservice [theRequest addValue: @"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"]; [theRequest addValue: msgLength forHTTPHeaderField:@"Content-Length"]; [theRequest setHTTPMethod:@"POST"]; [theRequest setHTTPBody: [soapMessage dataUsingEncoding:NSUTF8StringEncoding]]; NSData *webData = [NSURLConnection sendSynchronousRequest:theRequest returningResponse:&response error:error]; } I tried not to release a thing, because what i read on the net is it's almost always a memory thing. When i debug the code (NSZombieEnabled = YES) this is what i get: [Session started at 2010-05-31 15:56:13 +0200.] GNU gdb 6.3.50-20050815 (Apple version gdb-1461.2) (Fri Mar 5 04:43:10 UTC 2010) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "x86_64-apple-darwin".sharedlibrary apply-load-rules all Attaching to process 19856. test(19856) malloc: recording malloc stacks to disk using standard recorder test(19856) malloc: enabling scribbling to detect mods to free blocks test(19856) malloc: process 19832 no longer exists, stack logs deleted from /tmp/stack-logs.19832.test.w9Ek4L.index test(19856) malloc: stack logs being written into /tmp/stack-logs.19856.test.URRpQF.index Program received signal: “EXC_BAD_ACCESS”. Does anybody have a clue?? Thanks a lot! Lars

    Read the article

  • Help with preg_replace and uniqid for jQuery dialog boxes

    - by Scarface
    First of all, do not be overwhelmed by the long code, I just put it for reference...I have a function that runs preg_replace on each comment users post and puts matches in a jquery dialog box with a matching open-link. For example, if there is a paragraph with two matches, they will be put inside two divs, and a jquery dialog function will be echoed twice; one for each div. While this works for one match, if there are multiple matches, it does not. I am not sure how to distribute unique ids at a point in time for each of the divs and matching dialog open-scripts. Keep in mind, I removed the preg replace function since it kind of complicates the problem. If anyone has any ideas, they will be greatly appreciated. <?php $id=uniqid(); $id2=uniqid(); echo "<div id=\"$id2\"> </div>"; ?> $.ui.dialog.defaults.bgiframe = true; $(function() { $("<?php echo"#$id2"; ?>").dialog({hide: 'clip', modal: true ,width: 600,height: 350,position: 'center', show: 'clip',stack: true,title: 'title', minHeight: 25, minWidth: 100, autoOpen: false}); $('<?php echo"#$id"; ?>').click(function() { $('<?php echo"#$id2"; ?>').dialog('open'); }) .hover( function(){ $(this).addClass("ui-state-hover"); }, function(){ $(this).removeClass("ui-state-hover"); } ).mousedown(function(){ $(this).addClass("ui-state-active"); }) .mouseup(function(){ $(this).removeClass("ui-state-active"); }); });

    Read the article

  • InstallUtil Publishing WMI Schema to 64 Bit Directory Instead of 32 Bit Directory

    - by Nick
    This is similar to this question, but it doesn't look like a good solution was ever determined, so I'm opening a new one with clarified details. We wrote a .NET service, which among other things, publishes some of the class hierarchy using WMI. On a 64-Bit machine, we are running the 32-bit version of InstallUtil to install the service. It installs successfully, but when the service runs, we receive the following error message when publishing a WMI class using Instrumentation.Publish() DirectoryNotFoundException - (Could not find a part of the path 'C:\Windows\system32\WBEM\Framework\root\MyNamespace\MyService'.) However, this directory does exist in the C:\Windows\syswow64 directory. If we manually copy that directory structure to the system32 directory, then everything works. However, we are looking for automated solution, because we have this packaged up in an MSI which we distribute onto many servers. We have tried running the 64-Bit version of InstallUtil, to see if that would work, however... and this is the really weird part... it gives us an error on install that says Installing WMI Schema: Started An exception occurred during the Install phase. System.IO.DirectoryNotFoundException: Could not find a part of the path 'C:\Windows\system32\WBEM\Framework\root\MyNamespace\MyService.mof'. It looks as if somehow, the WMI installer flipped around. Has anyone else experienced this, or know of a work around?

    Read the article

  • "Work stealing" vs. "Work shrugging (tm)"?

    - by John
    Why is it that I can find lots of information on "work stealing" and nothing on "work shrugging (tm)" as a load-balancing strategy? I am surprised, because work stealing seems to me to have an inherent weakness when implementing efficient fine-grained load-balancing. Viz: relying on consumer processors to implement distribution (by actively stealing) begs the question of what these processors do when they find no work. None of the work-stealing references and implementations I have come across so far address this issue satisfactorily for me. They either:
    1) Manage not to disclose what they do with idle processors! [Cilk] (anyone know?)
    2) Have all idle processors sleep and wake periodically and scatter messages to the four winds to see if any work has arrived [e.g. JAWS] (= way too latent & inefficient for me).
    3) Assume that it is acceptable to have processors "spinning" looking for work (= non-starter for me!)
    Unless anyone thinks there is a solution for this, I will move on to consider a "Work Shrugging (tm)" strategy. Having the task-producing processor distribute excess load seems to me inherently capable of a much more efficient implementation. However, a quick Google search didn't turn up anything under the heading of "Work Shrugging", so any pointers to prior art would be welcome. tx Tags I would have added if I was allowed to: [work-stealing]
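
    To pin the terminology down, here is a minimal sketch (my own illustration, not taken from Cilk or JAWS) of the producer-push idea: the task producer hands each new task to the least-loaded worker, so an idle worker simply blocks on its own queue instead of spinning or stealing:

        # Producer-push ("work shrugging") sketch, Python 2 style.
        # Illustrative only, not a benchmarked scheduler.
        import threading
        import Queue   # "queue" on Python 3

        NUM_WORKERS = 4
        queues = [Queue.Queue() for _ in range(NUM_WORKERS)]

        def worker(q):
            while True:
                task = q.get()   # blocks while idle: no spinning, no polling
                if task is None:
                    break        # shutdown sentinel
                task()

        threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
        for t in threads:
            t.start()

        def shrug(task):
            # Producer-side distribution: give the task to the worker with the
            # shortest backlog (qsize is only a hint, which is enough here).
            min(queues, key=lambda q: q.qsize()).put(task)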

    Read the article

  • Creating customized .dmg files upon download

    - by Marten
    I want to distribute a cross-platform application for which the executable file is slightly different, depending on the user who downloaded it. This is done by having a placeholder string somewhere in the executable that is replaced with something user-specific upon download. The webserver that has to do these string replacements is a Linux machine. For Windows, the executable is not compressed in the installer .exe, so the string replacement is easy. For uncompressed Mac OS X .dmg files, this is also easy. However, .dmg files that are compressed with either gzip or bzip2 are not so easy. For example, in the latter case, the compressed .dmg is not one big bzip2-compressed disk image, but instead consists of a few different bzip2-compressed parts (with different block sizes) and a plist suffix. Also, decompressing and recompressing the different parts with bzip2 does not result in the original data, so I'm guessing Apple uses some different parameters to bzip2 than the command-line tool. Is there a way to generate a compressed .dmg from an uncompressed one on Linux (which does not have hdiutil)? Or maybe another suggestion for creating customized applications without pregenerating them? It should work without any input by the user.
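
    For the string-replacement half of this, the usual trick (sketched here with a made-up placeholder; nothing below is specific to .dmg internals) is to bake a fixed-width placeholder into the executable and overwrite it in place, so offsets and the file size never change, which is also why it only works on the uncompressed image:

        # Personalise an uncompressed image per download by patching a
        # fixed-width placeholder. The placeholder string is hypothetical.
        PLACEHOLDER = b'@@USER_TOKEN_PLACEHOLDER@@@@@@@@'   # fixed width

        def personalise(template_path, out_path, token):
            data = open(template_path, 'rb').read()
            value = token.encode('ascii')
            if len(value) > len(PLACEHOLDER):
                raise ValueError('token longer than placeholder')
            if PLACEHOLDER not in data:
                raise ValueError('placeholder not found (image compressed?)')
            value = value.ljust(len(PLACEHOLDER), b'\0')    # pad, keep size identical
            open(out_path, 'wb').write(data.replace(PLACEHOLDER, value, 1))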

    Read the article

  • Can I use RegFree Com with an application written in Excel VBA?

    - by Steven
    I have an application that is written in Excel VBA, myApp.xls. Currently we use InstallShield to distribute the application. Since we are moving to Windows Vista, I need to be able to install the application as a standard user. This does not allow for me to update the registry during the install process. In addition to the excel application we also have several VB6 applications. In order to install those applications, I was able to use RegFree com and Make My Manifest (MMM) as suggested by people on this forum (I greatly appreciate the insight btw!). This process, although a bit tedious, worked well. I then packaged the output from MMM in a VS '05 installer project and removed the UAC prompt on the msi using msiinfo.exe. Now I am faced with installing an application that basically lives in an Excel file. I modified a manifest that MMM created for me for one of my VB6 apps and tried to run the excel file through that, but I did not have much luck. Does anybody know of a way to do this? Does RegFree com work with VBA? Any thoughts or suggestions would be much appreciated. Thanks, Steve

    Read the article

  • Reporting System architecture for better performance

    - by pauloya
    Hi, we have a product that runs SQL Server Express 2005 and mainly uses ASP.NET. The database has around 200 tables, with a few (4 or 5) that can grow by 300 to 5000 rows per day and keep a history of 5 years, so they can grow to 10 million rows. We have built a reporting platform that allows customers to build reports based on templates, fields and filters. We have faced performance problems almost since the beginning; we try to keep report display times under 10 seconds, but some of them go up to 25 seconds (especially for customers with a long history). We keep checking indexes and trying to improve the queries, but we get the feeling that there's only so much we can do. Of course, the fact that the queries are generated dynamically doesn't help with the optimization. We also added a few tables that keep redundant data, but then we have the added problem of keeping this data up to date, and SQL Server Express also has a limit on database size. We are now at a point where we have to decide if we want to give up real-time reports, or maybe cut the history to get better performance. I would like to ask what the recommended approach is for this kind of system. Also, should we start looking at third-party tools/platforms? I know OLAP can be an option, but can we make it work on SQL Server Express, or at least with a license that is cheap enough to distribute to thousands of deployments? Thanks

    Read the article

  • Problem with copying local data onto HDFS on a Hadoop cluster using Amazon EC2/S3.

    - by Deepak Konidena
    Hi, I have setup a Hadoop cluster containing 5 nodes on Amazon EC2. Now, when i login into the Master node and submit the following command bin/hadoop jar <program>.jar <arg1> <arg2> <path/to/input/file/on/S3> It throws the following errors (not at the same time.) The first error is thrown when i don't replace the slashes with '%2F' and the second is thrown when i replace them with '%2F': 1) Java.lang.IllegalArgumentException: Invalid hostname in URI S3://<ID>:<SECRETKEY>@<BUCKET>/<path-to-inputfile> 2) org.apache.hadoop.fs.S3.S3Exception: org.jets3t.service.S3ServiceException: S3 PUT failed for '/' XML Error Message: The request signature we calculated does not match the signature you provided. check your key and signing method. Note: 1)when i submitted jps to see what tasks were running on the Master, it just showed 1116 NameNode 1699 Jps 1180 JobTracker leaving DataNode and TaskTracker. 2)My Secret key contains two '/' (forward slashes). And i replace them with '%2F' in the S3 URI. PS: The program runs fine on EC2 when run on a single node. Its only when i launch a cluster, i run into issues related to copying data to/from S3 from/to HDFS. And, what does distcp do? Do i need to distribute the data even after i copy the data from S3 to HDFS?(I thought, HDFS took care of that internally) IF you could direct me to a link that explains running Map/reduce programs on a hadoop cluster using Amazon EC2/S3. That would be great. Regards, Deepak.

    Read the article

  • Enterprise Eclipse Provisioning - Or - How to share your standard Eclipse setup with other developers

    - by nikhil
    We use Eclipse as the IDE for developing all sorts of Java/J2EE applications in our 150 people odd IT department. One of the common problems we have been seeing is that developers download and install different versions of Eclipse and plugins based on their personal likes and dislikes. We have been trying to bring some consistency to this and have standardized on the version and the plugins that developers should be using. So the problem now is how do we distribute this installation to the team. We have zipped the directories and shared it through a shared drive. But I am looking for a better solution using some kind of provisioning tool for Eclipse using which people can install the IDE or get updates. Has anyone faced this problem? What are your solutions to this? How do you ensure a standard Eclipse environment across developers? I found Yoxos as a potential solution to this. Does anyone have any experience with it? Can p2 be used to do this?

    Read the article

  • Abusing the word "library"

    - by William Pursell
    I see a lot of questions, both here on SO and elsewhere, about "maintaining common libraries in a VCS". That is, projects foo and bar both depend on libbaz, and the questioner is wondering how they should import the source for libbaz into the VCS for each project. My question is: WTF? If libbaz is a library, then foo doesn't need its source code at all. There are some libraries that are reasonably designed to be used in this manner (eg gnulib), but for the most part foo and bar ought to just link against the library. I guess my thinking is: if you cut-and-paste source for a library into your own source tree, then you obviously don't care about future updates to the library. If you care about updates, then just link against the library and trust the library maintainers to maintain a stable API. If you don't trust the API to remain stable, then you can't blindly update your own copy of the source anyway, so what is gained? To summarize the question: why would anyone want to maintain a copy of a library in the source code for a project rather than just linking against that library and requiring it as a dependency? If the only answer is "don't want the dependency", then why not just distribute a copy of the library along with your app, but keep them totally separate?

    Read the article

  • SQL Server: A Grouping question that's annoying me

    - by user366729
    I've been working with SQL Server for the better part of a decade, and this grouping (or partitioning, or ranking...I'm not sure what the answer is!) one has me stumped. Feels like it should be an easy one, too. I'll generalize my problem: Let's say I have 3 employees (don't worry about them quitting or anything...there's always 3), and I keep up with how I distribute their salaries on a monthly basis. Month Employee PercentOfTotal -------------------------------- 1 Alice 25% 1 Barbara 65% 1 Claire 10% 2 Alice 25% 2 Barbara 50% 2 Claire 25% 3 Alice 25% 3 Barbara 65% 3 Claire 10% As you can see, I've paid them the same percent in Months 1 and 3, but in Month 2, I've given Alice the same 25%, but Barbara got 50% and Claire got 25%. What I want to know is all the distinct distributions I've ever given. In this case there would be two -- one for months 1 and 3, and one for month 2. I'd expect the results to look something like this (NOTE: the ID, or sequencer, or whatever, doesn't matter) ID Employee PercentOfTotal -------------------------------- X Alice 25% X Barbara 65% X Claire 10% Y Alice 25% Y Barbara 50% Y Claire 25% Seems easy, right? I'm stumped! Anyone have an elegant solution? I just put together this solution while writing this question, which seems to work, but I'm wondering if there's a better way. Or maybe a different way from which I'll learn something. WITH temp_ids (Month) AS ( SELECT DISTINCT MIN(Month) FROM employees_paid GROUP BY PercentOfTotal ) SELECT EMP.Month, EMP.Employee, EMP.PercentOfTotal FROM employees_paid EMP JOIN temp_ids IDS ON EMP.Month = IDS.Month GROUP BY EMP.Month, EMP.Employee, EMP.PercentOfTotal Thanks y'all! -Ricky
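
    Outside of SQL, the result the question is after can be illustrated by keying each month on its complete distribution "signature" and collapsing duplicates; a quick sketch using the sample data above (Python here purely for illustration):

        # Group months by their full (employee, percent) signature.
        rows = [
            (1, 'Alice', 25), (1, 'Barbara', 65), (1, 'Claire', 10),
            (2, 'Alice', 25), (2, 'Barbara', 50), (2, 'Claire', 25),
            (3, 'Alice', 25), (3, 'Barbara', 65), (3, 'Claire', 10),
        ]

        by_month = {}
        for month, employee, pct in rows:
            by_month.setdefault(month, []).append((employee, pct))

        distinct = {}
        for month, dist in sorted(by_month.items()):
            signature = tuple(sorted(dist))           # order-independent signature
            distinct.setdefault(signature, month)     # keep the first month seen

        # Two signatures survive: months 1 and 3 share one, month 2 has its own.
        for signature, month in distinct.items():
            print(month, signature)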

    Read the article

  • How can I edit an exe's resources (File Description, Icon, etc.) using a command line utility?

    - by Coder7862396
    The whole story: I have created a fancy .NET program which has an installer created by the Visual Studio Installer (VSI). The VSI creates 2 files (setup.exe and MyProgramSetup.msi). I understand the reasons for both files being needed, however, I only want to distribute a SINGLE executable installer to users. I do not want them to see 2 files and have to choose between them. In order to do this I have merged the 2 files into a self-extracting archive using IExpress (as seen in this answer: http://stackoverflow.com/questions/535966/merge-msi-and-exe). This works well, however, the self-extracting archive that gets created has an ugly icon and confusing file info (the File Description is "Win32 Cabinet Self-Extractor" with 43 black spaces after it). I need to replace the icon with my custom one and change some of the file properties like "Description", "Company", etc. I would like to have this automatically done as a build step so having a program which is a command line/console utility would be great. I've searched for a while now and can only find one program which does exactly what I want (ResourceTuner Console: http://www.heaventools.com/command-line_resource_editor.htm) but it costs an arm and a leg and my budget is $0. Does anyone know a better way to achieve what I want, or know of a program which can replace an executable's resources without having to use a GUI? By the way, I have also tried "SiComponents Resource Builder 3" which can't even open the executable, and "Julien Audo's ResEdit" which just crashes when I execute the command: "resedit.exe -convert "Modified Resources.rc" "MyProgramSetup.exe"

    Read the article

  • What does stdole.dll do?

    - by rc1
    We have a large C# (.net 2.0) app which uses our own C++ COM component and a 3rd party fingerprint scanner library also accessed via COM. We ran into an issue where in production some events from the fingerprint library do not get fired into the C# app, although events from our own C++ COM component fired and were received just fine. Using MSINFO32 to compare the loaded modules on a working system to those on a failing system we determined that this was caused by STDOLE.DLL not being in the GAC and hence not loaded into the faulty process. Dragging this file into the GAC caused events to come back fine from the fingerprint COM library. So what does stdole.dll do? It's 16k in size so it can't be much... is it some sort of link to another library like STDOLE32? How come its absence causes such odd behavior? How do we distribute stdole.dll? This is an XCOPY deploy app and we don't use the GAC. Should we package it as a resource and use the System.EnterpriseServices.Internal.Publish.GacInstall to ensure it's in the GAC?

    Read the article

  • Makefile : Build in a separate directory tree

    - by Simone Margaritelli
    My project (an interpreted language) has a standard library composed by multiple files, each of them will be built into an .so dynamic library that the interpreter will load upon user request (with an import directive). Each source file is located into a subdirectory representing its "namespace", for instance : The build process has to create a "build" directory, then when each file is compiling has to create its namespace directory inside the "build" one, for instance, when compiling std/io/network/tcp.cc he run an mkdir command with mkdir -p build/std/io/network The Makefile snippet is : STDSRC=stdlib/std/hashing/md5.cc \ stdlib/std/hashing/crc32.cc \ stdlib/std/hashing/sha1.cc \ stdlib/std/hashing/sha2.cc \ stdlib/std/io/network/http.cc \ stdlib/std/io/network/tcp.cc \ stdlib/std/io/network/smtp.cc \ stdlib/std/io/file.cc \ stdlib/std/io/console.cc \ stdlib/std/io/xml.cc \ stdlib/std/type/reflection.cc \ stdlib/std/type/string.cc \ stdlib/std/type/matrix.cc \ stdlib/std/type/array.cc \ stdlib/std/type/map.cc \ stdlib/std/type/type.cc \ stdlib/std/type/binary.cc \ stdlib/std/encoding.cc \ stdlib/std/os/dll.cc \ stdlib/std/os/time.cc \ stdlib/std/os/threads.cc \ stdlib/std/os/process.cc \ stdlib/std/pcre.cc \ stdlib/std/math.cc STDOBJ=$(STDSRC:.cc=.so) all: stdlib stdlib: $(STDOBJ) .cc.so: mkdir -p `dirname $< | sed -e 's/stdlib/stdlib\/build/'` $(CXX) $< -o `dirname $< | sed -e 's/stdlib/stdlib\/build/'`/`basename $< .cc`.so $(CFLAGS) $(LDFLAGS) I have two questions : 1 - The problem is that the make command, i really don't know why, doesn't check if a file was modified and launch the build process on ALL the files no matter what, so if i need to build only one file, i have to build them all or use the command : make path/to/single/file.so Is there any way to solve this? 2 - Any way to do this in a "cleaner" way without have to distribute all the build directories with sources? Thanks

    Read the article

  • Backwards compatibility when using Core Data

    - by Alex
    Could anybody shed some light as to why is my app crashing with the following error on iPhone OS 2.2.1 dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName Expected in: /System/Library/Frameworks/Foundation.framework/Foundation I have weak linked CoreData.framework, and have the Base SDK set to 3.0 and Deployment Target set to SDK 2.2 The app already uses other 3.0 features when available and I did not have any problems with those. But apparently the backward-compatibility methods used for other features do not work with Core Data. The app crashes before app delegate's applicationDidFinishLaunching gets called. Here's the debugger log: [Session started at 2010-05-25 20:17:03 -0400.] GNU gdb 6.3.50-20050815 (Apple version gdb-1119) (Thu May 14 05:35:37 UTC 2009) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".tty /dev/ttys001 Loading program into debugger… sharedlibrary apply-load-rules all warning: Unable to read symbols from "MessageUI" (not yet mapped into memory). warning: Unable to read symbols from "CoreData" (not yet mapped into memory). Program loaded. target remote-mobile /tmp/.XcodeGDBRemote-12038-42 Switching to remote-macosx protocol mem 0x1000 0x3fffffff cache mem 0x40000000 0xffffffff none mem 0x00000000 0x0fff none run Running… [Switching to thread 10755] [Switching to thread 10755] Re-enabling shared library breakpoint 1 Re-enabling shared library breakpoint 2 Re-enabling shared library breakpoint 3 Re-enabling shared library breakpoint 4 Re-enabling shared library breakpoint 5 (gdb) continue warning: Unable to read symbols for ""/Users/alex/iPhone Projects/AppName/build/Debug-iphoneos"/AppName.app/AppName" (file not found). dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName Expected in: /System/Library/Frameworks/Foundation.framework/Foundation (gdb)

    Read the article

  • Having a Python package install itself under a different name

    - by cool-RR
    I'm developing a package called garlicsim. (Website.) The package is intended for Python 2.X, but I am also offering Python 3 support on a different fork called garlicsim_py3.(1) So both of these packages live side by side on PyPI; Python 3 users install garlicsim_py3, and Python 2 users install garlicsim. The problem is: when third-party modules want to use garlicsim, they should have one package name to refer to, not two. Sure, they can do something like this:
      try:
          import garlicsim
      except ImportError:
          import garlicsim_py3 as garlicsim
    But I would prefer not to make the developers of these modules do this. Is there a way that garlicsim_py3 can install itself under the alias garlicsim? What I want is for a Python 3 user to be able to import garlicsim and refer to the module all the time as garlicsim, but have it really be garlicsim_py3. I know that the Distribute project does something like this: they make it so you can import setuptools and it will be redirected into their code. I have no idea how they do it. Any ideas? (1) I've reached the decision to support Python 3 on a fork instead of in the same code base; it's important to me that the code base stays clean, and I would really not want to introduce compatibility hacks.
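
    One direction worth sketching (an assumption about how it could be done, not a description of what garlicsim actually ships): have garlicsim_py3 also install a tiny alias package named garlicsim whose __init__ swaps the real module into sys.modules:

        # Hypothetical contents of the alias package garlicsim/__init__.py,
        # installed by garlicsim_py3 so that `import garlicsim` hands back
        # the py3 fork.
        import sys
        import garlicsim_py3

        # The import machinery returns whatever sits in sys.modules under the
        # requested name once this module finishes executing, so rebinding the
        # entry makes `import garlicsim` yield garlicsim_py3 everywhere.
        sys.modules['garlicsim'] = garlicsim_py3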

    Read the article
