Search Results

Search found 18353 results on 735 pages for 'storage design'.

Page 318/735 | < Previous Page | 314 315 316 317 318 319 320 321 322 323 324 325  | Next Page >

  • 'pip install carbon' looks like it works, but pip disagrees afterward

    - by fennec
    I'm trying to use pip to install the package carbon, a package related to statistics collection. When I run pip install carbon, it looks like everything works. However, pip is unconvinced that the package is actually installed. (This ultimately causes trouble because I'm using Puppet and have a rule to install carbon using pip, and when Puppet asks pip "is this package installed?" it says "no" and reinstalls it.) How do I figure out what's preventing pip from recognizing the success of this installation? Here is the output of the regular install:

      root@statsd:/opt/graphite# pip install carbon
      Downloading/unpacking carbon
      Downloading carbon-0.9.9.tar.gz
      Running setup.py egg_info for package carbon
      package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
      Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon)
      Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon)
      Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon)
      Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon)
      Installing collected packages: carbon
      Running setup.py install for carbon
      package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
      changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775
      changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775
      changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775
      changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775
      changing mode of build/scripts-2.7/carbon-client.py from 664 to 775
      changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775
      changing mode of /opt/graphite/bin/carbon-aggregator.py to 775
      changing mode of /opt/graphite/bin/carbon-cache.py to 775
      changing mode of /opt/graphite/bin/carbon-relay.py to 775
      changing mode of /opt/graphite/bin/carbon-client.py to 775
      Successfully installed carbon
      Cleaning up...
      root@statsd:/opt/graphite# pip freeze | grep carbon
      root@statsd:

    Here is the verbose version of the install:

      root@statsd:/opt/graphite# pip install carbon -v
      Downloading/unpacking carbon
      Using version 0.9.9 (newest of versions: 0.9.9, 0.9.9, 0.9.8, 0.9.7, 0.9.6, 0.9.5)
      Downloading carbon-0.9.9.tar.gz
      Running setup.py egg_info for package carbon
      running egg_info
      creating pip-egg-info/carbon.egg-info
      writing requirements to pip-egg-info/carbon.egg-info/requires.txt
      writing pip-egg-info/carbon.egg-info/PKG-INFO
      writing top-level names to pip-egg-info/carbon.egg-info/top_level.txt
      writing dependency_links to pip-egg-info/carbon.egg-info/dependency_links.txt
      writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt'
      warning: manifest_maker: standard file '-c' not found
      package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
      reading manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt'
      writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt'
      Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon)
      Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon)
      Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon)
      Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon)
      Installing collected packages: carbon
      Running setup.py install for carbon
      running install
      running build
      running build_py
      creating build
      creating build/lib.linux-i686-2.7
      creating build/lib.linux-i686-2.7/carbon
      copying lib/carbon/amqp_publisher.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/manhole.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/instrumentation.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/cache.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/management.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/relayrules.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/events.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/protocols.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/conf.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/rewrite.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/hashing.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/writer.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/client.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/util.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/service.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/amqp_listener.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/routers.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/storage.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/log.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/__init__.py -> build/lib.linux-i686-2.7/carbon
      copying lib/carbon/state.py -> build/lib.linux-i686-2.7/carbon
      creating build/lib.linux-i686-2.7/carbon/aggregator
      copying lib/carbon/aggregator/receiver.py -> build/lib.linux-i686-2.7/carbon/aggregator
      copying lib/carbon/aggregator/rules.py -> build/lib.linux-i686-2.7/carbon/aggregator
      copying lib/carbon/aggregator/buffers.py -> build/lib.linux-i686-2.7/carbon/aggregator
      copying lib/carbon/aggregator/__init__.py -> build/lib.linux-i686-2.7/carbon/aggregator
      package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
      creating build/lib.linux-i686-2.7/twisted
      creating build/lib.linux-i686-2.7/twisted/plugins
      copying lib/twisted/plugins/carbon_relay_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins
      copying lib/twisted/plugins/carbon_aggregator_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins
      copying lib/twisted/plugins/carbon_cache_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins
      copying lib/carbon/amqp0-8.xml -> build/lib.linux-i686-2.7/carbon
      running build_scripts
      creating build/scripts-2.7
      copying and adjusting bin/validate-storage-schemas.py -> build/scripts-2.7
      copying and adjusting bin/carbon-aggregator.py -> build/scripts-2.7
      copying and adjusting bin/carbon-cache.py -> build/scripts-2.7
      copying and adjusting bin/carbon-relay.py -> build/scripts-2.7
      copying and adjusting bin/carbon-client.py -> build/scripts-2.7
      changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775
      changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775
      changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775
      changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775
      changing mode of build/scripts-2.7/carbon-client.py from 664 to 775
      running install_lib
      copying build/lib.linux-i686-2.7/carbon/amqp_publisher.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/manhole.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/amqp0-8.xml -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/instrumentation.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/cache.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/management.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/relayrules.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/events.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/protocols.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/conf.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/rewrite.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/hashing.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/writer.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/client.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/util.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/aggregator/receiver.py -> /opt/graphite/lib/carbon/aggregator
      copying build/lib.linux-i686-2.7/carbon/aggregator/rules.py -> /opt/graphite/lib/carbon/aggregator
      copying build/lib.linux-i686-2.7/carbon/aggregator/buffers.py -> /opt/graphite/lib/carbon/aggregator
      copying build/lib.linux-i686-2.7/carbon/aggregator/__init__.py -> /opt/graphite/lib/carbon/aggregator
      copying build/lib.linux-i686-2.7/carbon/service.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/amqp_listener.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/routers.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/storage.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/log.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/__init__.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/carbon/state.py -> /opt/graphite/lib/carbon
      copying build/lib.linux-i686-2.7/twisted/plugins/carbon_relay_plugin.py -> /opt/graphite/lib/twisted/plugins
      copying build/lib.linux-i686-2.7/twisted/plugins/carbon_aggregator_plugin.py -> /opt/graphite/lib/twisted/plugins
      copying build/lib.linux-i686-2.7/twisted/plugins/carbon_cache_plugin.py -> /opt/graphite/lib/twisted/plugins
      byte-compiling /opt/graphite/lib/carbon/amqp_publisher.py to amqp_publisher.pyc
      byte-compiling /opt/graphite/lib/carbon/manhole.py to manhole.pyc
      byte-compiling /opt/graphite/lib/carbon/instrumentation.py to instrumentation.pyc
      byte-compiling /opt/graphite/lib/carbon/cache.py to cache.pyc
      byte-compiling /opt/graphite/lib/carbon/management.py to management.pyc
      byte-compiling /opt/graphite/lib/carbon/relayrules.py to relayrules.pyc
      byte-compiling /opt/graphite/lib/carbon/events.py to events.pyc
      byte-compiling /opt/graphite/lib/carbon/protocols.py to protocols.pyc
      byte-compiling /opt/graphite/lib/carbon/conf.py to conf.pyc
      byte-compiling /opt/graphite/lib/carbon/rewrite.py to rewrite.pyc
      byte-compiling /opt/graphite/lib/carbon/hashing.py to hashing.pyc
      byte-compiling /opt/graphite/lib/carbon/writer.py to writer.pyc
      byte-compiling /opt/graphite/lib/carbon/client.py to client.pyc
      byte-compiling /opt/graphite/lib/carbon/util.py to util.pyc
      byte-compiling /opt/graphite/lib/carbon/aggregator/receiver.py to receiver.pyc
      byte-compiling /opt/graphite/lib/carbon/aggregator/rules.py to rules.pyc
      byte-compiling /opt/graphite/lib/carbon/aggregator/buffers.py to buffers.pyc
      byte-compiling /opt/graphite/lib/carbon/aggregator/__init__.py to __init__.pyc
      byte-compiling /opt/graphite/lib/carbon/service.py to service.pyc
      byte-compiling /opt/graphite/lib/carbon/amqp_listener.py to amqp_listener.pyc
      byte-compiling /opt/graphite/lib/carbon/routers.py to routers.pyc
      byte-compiling /opt/graphite/lib/carbon/storage.py to storage.pyc
      byte-compiling /opt/graphite/lib/carbon/log.py to log.pyc
      byte-compiling /opt/graphite/lib/carbon/__init__.py to __init__.pyc
      byte-compiling /opt/graphite/lib/carbon/state.py to state.pyc
      byte-compiling /opt/graphite/lib/twisted/plugins/carbon_relay_plugin.py to carbon_relay_plugin.pyc
      byte-compiling /opt/graphite/lib/twisted/plugins/carbon_aggregator_plugin.py to carbon_aggregator_plugin.pyc
      byte-compiling /opt/graphite/lib/twisted/plugins/carbon_cache_plugin.py to carbon_cache_plugin.pyc
      running install_data
      copying conf/storage-schemas.conf.example -> /opt/graphite/conf
      copying conf/rewrite-rules.conf.example -> /opt/graphite/conf
      copying conf/relay-rules.conf.example -> /opt/graphite/conf
      copying conf/carbon.amqp.conf.example -> /opt/graphite/conf
      copying conf/aggregation-rules.conf.example -> /opt/graphite/conf
      copying conf/carbon.conf.example -> /opt/graphite/conf
      running install_egg_info
      running egg_info
      creating lib/carbon.egg-info
      writing requirements to lib/carbon.egg-info/requires.txt
      writing lib/carbon.egg-info/PKG-INFO
      writing top-level names to lib/carbon.egg-info/top_level.txt
      writing dependency_links to lib/carbon.egg-info/dependency_links.txt
      writing manifest file 'lib/carbon.egg-info/SOURCES.txt'
      warning: manifest_maker: standard file '-c' not found
      reading manifest file 'lib/carbon.egg-info/SOURCES.txt'
      writing manifest file 'lib/carbon.egg-info/SOURCES.txt'
      removing '/opt/graphite/lib/carbon-0.9.9-py2.7.egg-info' (and everything under it)
      Copying lib/carbon.egg-info to /opt/graphite/lib/carbon-0.9.9-py2.7.egg-info
      running install_scripts
      copying build/scripts-2.7/validate-storage-schemas.py -> /opt/graphite/bin
      copying build/scripts-2.7/carbon-aggregator.py -> /opt/graphite/bin
      copying build/scripts-2.7/carbon-cache.py -> /opt/graphite/bin
      copying build/scripts-2.7/carbon-relay.py -> /opt/graphite/bin
      copying build/scripts-2.7/carbon-client.py -> /opt/graphite/bin
      changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775
      changing mode of /opt/graphite/bin/carbon-aggregator.py to 775
      changing mode of /opt/graphite/bin/carbon-cache.py to 775
      changing mode of /opt/graphite/bin/carbon-relay.py to 775
      changing mode of /opt/graphite/bin/carbon-client.py to 775
      writing list of installed files to '/tmp/pip-9LuJTF-record/install-record.txt'
      Successfully installed carbon
      Cleaning up...
      Removing temporary dir /opt/graphite/build...
      root@statsd:/opt/graphite#

    For reference, this is pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7).
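
    One way to see why pip freeze misses the package is to compare pip's metadata scan with where the files actually landed: the log shows carbon installing its library and egg-info under /opt/graphite/lib, and if that directory is not on sys.path, pip will never see the installed metadata. A minimal diagnostic sketch (assumes setuptools/distribute, which the log shows is present):

      import sys
      import pkg_resources

      # Every directory Python (and therefore pip freeze) scans for packages.
      for path in sys.path:
          print(path)

      # Every distribution pip's metadata scan can actually see.
      for dist in pkg_resources.working_set:
          print("%s %s %s" % (dist.project_name, dist.version, dist.location))

      # If carbon's files live under /opt/graphite/lib but that directory is
      # absent from sys.path above, "pip freeze" will never report the package.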

    Read the article

  • What initial modelling/design activities on Agile Projects do you do?

    - by dalton
    When developing an application using agile techniques, what, if any, initial modelling/architecture activities do you do, and how do you capture that knowledge? I'm not after a bullet list about XP, Scrum, Crystal, DSDM, etc., as I'm familiar with the methodologies; I want to know what you do above and beyond the guidance given by these. I find I work best by thinking the system through first, but I also like the benefits of timeboxing, story cards, pairing and TDD. The closest thing I've seen so far is Scott Ambler's Initial Architecture Modelling, but I was wondering what alternatives are used out there?

    Read the article

  • Porting a select loop application to Android with NDK. Design question.

    - by plaisthos
    Hi, I have a network application which uses a select loop like this:

      bool shutdown=false;
      while (!shutdown) {
          [do something]
          select(...,timeout);
      }

    The main loop cannot work like this in an Android application anymore, since the application needs to receive Intents, handle the GUI, etc. I think I have basically three possibilities:

    1. Move the main loop to the Java part of the application.
    2. Let the loop run in its own thread and somehow communicate from/to Java.
    3. Screw Android <= 2.3, use a native activity and use AInputQueue/ALooper instead of select.

    The first possibility is not easy, since Java has no select which works on fds. Simply using select and returning to Java after each loop iteration is not an elegant possibility either, since it requires setting the timeout to something like 20 ms to get a good response time in the Java part of the program. The second possibility sounds nicer, but I have to do some communication between the Java and the C/C++ part of the program. Things that could work: using a socket (kind of ugly), or using native calls in the "Java GUI thread" and callbacks from native code in the "C thread". Both threads need thread-safe implementations, but this is manageable. I have not explored the third possibility, but I think it is not the way to go. I think I can hack something together which will work, but I'm asking what the best path to choose is.
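
    For the second possibility, a common way to avoid a short polling timeout is to add a control descriptor to the select set, so the other thread can wake the loop instantly. A minimal Python sketch of just that wake-up pattern (the Java/C split is assumed away; the mechanism is the point):

      import socket
      import select
      import threading

      # Control channel: the "GUI" side writes to wake_tx; the select loop
      # watches wake_rx alongside its network sockets.
      wake_rx, wake_tx = socket.socketpair()

      def select_loop(watched):
          while True:
              readable, _, _ = select.select(watched + [wake_rx], [], [])
              if wake_rx in readable:
                  cmd = wake_rx.recv(1)
                  if cmd == b"q":          # shutdown requested by the other thread
                      return
              for sock in readable:
                  if sock is not wake_rx:
                      pass                 # handle network traffic as before

      t = threading.Thread(target=select_loop, args=([],))
      t.start()
      wake_tx.send(b"q")                   # e.g. from an Intent/GUI callback
      t.join()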

    Read the article

  • How do I make my Boot Camp partition bootable again?

    - by KJFMusic
    I'm having a similar problem to everyone else in this posting. I have 5 partitions, 3 of which I created for my Mac OS Lion installation, my Windows 7 installation, and a 3rd for storage. Everything was running fine for quite some time, until recently my Windows 7 installation suddenly stopped booting. Instead of a start-up screen I get:

      Windows failed to start. A recent hardware or software change might be the cause.
      File: \BOOT\BCD
      Status: 0xc000000d
      Info: An error occurred while attempting to read the boot configuration data

    Mac OS Lion starts up fine, but I'm unable to mount my "Bootcamp" partition or the "Storage" partition. On top of that, "Storage" has been renamed to "disk0s5". When I installed Windows 7 it didn't recognize the "Storage" partition that was created in Lion, so it merged what it thought was free disk space (I'm assuming the same space that Mac OS recognized as Storage) into the root drive of Windows 7 (Bootcamp). Are you able to assist?

    Read the article

  • NFSv4 "Too many levels of symbolic links" error

    - by user1434058
    Both machines are running Ubuntu 12.04. On the remote NFSv4 client:

      $ ls /mnt/storage/aaaaaaa_aaa/bbbb/cccc_ccccc

    gives this error:

      ls: reading directory .: Too many levels of symbolic links

    How can I fix this? When the error occurs, ls starts listing the files anyway, but PHP breaks.

    On the NFSv4 server, in /etc/fstab:

      /mnt/storage /srv/storage none bind 0 0

    In /etc/exports:

      /srv 192.168.1.0/24(rw,async,insecure,no_subtree_check,crossmnt,fsid=0,no_root_squash)
      /srv/storage 192.168.1.0/24(rw,async,nohide,insecure,no_subtree_check,no_root_squash)

    ERROR:

      root@ds:/mnt/storage/foreign_dbs/imdb/imdb_htmls# ls -l | head
      ls: reading directory .: Too many levels of symbolic links
      total 10302840
      -rw-r--r-- 1 root root 10484 Jul 5 13:56 0019038.gz
      -rw-r--r-- 1 root root 16264 Mar 30 00:31 0259701.gz
      -rw-r--r-- 1 root root 13784 Mar 30 14:20 1000000.gz
      -rw-r--r-- 1 root root 12741 Mar 30 13:04 1000003.gz
      -rw-r--r-- 1 root root 12794 Mar 30 12:40 1000004.gz
      -rw-r--r-- 1 root root 13123 Mar 30 12:07 1000005.gz
      -rw-r--r-- 1 root root 13183 Mar 30 12:04 1000006.gz
      -rw-r--r-- 1 root root 13443 Jul 4 01:16 1000007.gz
      -rw-r--r-- 1 root root 12968 Mar 30 11:05 1000008.gz

    I came across it in PHP as well: scandir would return 1612577.gz and 1612579.gz but skip 1612578.gz, even though the file types and properties on them are identical. This only happens on the NFS client; it works 100% on the server.
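
    One way to narrow down exactly which entries the client-side directory read drops is to take a listing on the server and on the client and diff them. A small helper sketch (run it on each machine against the paths from the post, then compare the two output files):

      import os
      import sys

      # Run on the server:  python snapshot.py /srv/storage/foreign_dbs/imdb/imdb_htmls > server.txt
      # Run on the client:  python snapshot.py /mnt/storage/foreign_dbs/imdb/imdb_htmls > client.txt
      # Then diff server.txt and client.txt to see what the client loses.
      def snapshot(directory):
          for name in sorted(os.listdir(directory)):
              print(name)

      if __name__ == "__main__":
          snapshot(sys.argv[1])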

    Read the article

  • Oracle data warehouse design - fact table acting as a dimension?

    - by Elizabeth
    THANKS: Both answers here are very helpful, but I could only pick one. I really appreciate the advice!

    Our data warehouse will be used more for workflow reports than traditional analytical reports. Our users care about the "current picture" far more than history (though history matters, too). We are a government entity that does not have costs or related calculations: mostly just counts of people within given locations, with related history. We are using Oracle, and I have found a distinct advantage in using the star join whenever possible, and would like to rearchitect everything to resemble the star schema as closely as is reasonable for our business uses. Speed in this DW is vital, and a number of tests have already proven the star schema approach to me.

    Our "person" table is key: it contains over 4 million records and will be the most frequently used source in queries. It can be seen at the center of a star with multiple dimensions (like age, gender, affiliation, location, etc.). It is a very LONG table, particularly when I join it to the address and contact information. However, it is more like a dimension table when we start looking at history. For example, there are two different history tables that have a person key pointing to the person table. One has over 20 million records, and the other has almost 50 million and grows daily.

    Is this table a fact table or a dimension table? Can one work as both? If so, is that going to be a big performance problem? Is it common to query more off of a dimension than a fact? What happens if a DIFFERENT fact table that uses the person table as a dimension is actually only 60,000 records (much smaller)? I think my problem is that our data, and our use of it, does not fit with the commonly used examples of star schemas.

    CLARIFICATION: Some good thoughts have been added below, but perhaps I left too much out to really explain well. Here's some more info: we handle a voter database. We don't have any measures except voter counts by various groups: voter counts by party, by age, by location; voter counts by ballot type and election, by ballot status and election, etc. We do have a "voting history" log as well as an activity audit log (change of address, party, etc.). We have information on which voters are election workers, and all the related information. I figure I'll get to the peripheral stuff later. For now I'm focusing on our two major "business processes": voter registration (which IS the voter) and election turnout. In the first, voter is a fact. In the second, voter is a dimension, along with party, election, and type of ballot. (And in case anyone is worried: no, we don't know HOW people vote, just that they do. LOL) I hope that clarifies things a bit.

    Read the article

  • TRibbon's large button image is not centered... any ideas? Easy to demonstrate at design time.

    - by X-Ray
    I'm using Delphi 2009 (updates 1, 2, 3, 4), and I'm seeing something quite peculiar: the image on the button is not centered in the button when I have a large button with a large glyph! Rather than being centered, the left part of the glyph starts at the center of the button. A clue: when I go into the action editor, select the action and use the ImageIndex combobox in the Object Inspector, the list is empty (normally I'd see the available images in the combobox). It seems as though there's an image width property I've failed to set, or an imagelist not correctly configured. I'd expect the glyph on a large button to be 32x32. Try the following: paste these components into an empty form, add a 32x32 image to the image list, and set Action1's ImageIndex to 0. You'll immediately see what I mean! Can anyone tell me why it looks that way? I find it interesting that the ribbon demo app doesn't show this problem; I even tried the same image. Thank you!

      object ActionManager1: TActionManager
        ActionBars = <
          item
            Items = <
              item
                Action = Action1
                Caption = '&Action1'
                ImageIndex = 0
                CommandProperties.ButtonSize = bsLarge
              end>
            ActionBar = RibbonGroup1
          end>
        LargeDisabledImages = img3232
        LargeImages = img3232
        Left = 376
        Top = 184
        StyleName = 'Ribbon - Luna'
        object Action1: TAction
          Caption = 'Action1'
          ImageIndex = 0
        end
      end
      object Ribbon1: TRibbon
        Left = 0
        Top = 0
        Width = 693
        Height = 147
        ActionManager = ActionManager1
        Caption = 'Ribbon1'
        Tabs = <
          item
            Caption = 'RibbonPage1'
            Page = RibbonPage1
          end>
        ExplicitLeft = 232
        ExplicitTop = 80
        ExplicitWidth = 0
        DesignSize = (
          693
          147)
        StyleName = 'Ribbon - Luna'
        object RibbonPage1: TRibbonPage
          Left = 0
          Top = 54
          Width = 692
          Height = 93
          Caption = 'RibbonPage1'
          Index = 0
          object RibbonGroup1: TRibbonGroup
            Left = 4
            Top = 3
            Width = 54
            Height = 86
            ActionManager = ActionManager1
            Caption = 'RibbonGroup1'
            GroupIndex = 0
          end
        end
      end
      object img3232: TImageList
        Height = 32
        Width = 32
        Left = 376
        Top = 256
      end

    Read the article

  • RESTful API design question - how should one allow users to create new resource instances?

    - by Tamás
    I'm working in a research group where we intend to publish implementations of some of the algorithms we develop on the web via a RESTful API. Most of these algorithms work on small to medium-size datasets, and in many cases a user of our services might want to run multiple queries (with different parameters) on the same dataset, so to me it seems reasonable to allow users to upload their datasets in advance and refer to them in their queries later. In this sense, a dataset could be a resource in my API, and an algorithm could be another. My question is: how should I let the users upload their own datasets? I cannot simply let users upload their data to /dataset/dataset_id, as letting the users invent their own dataset_ids might result in ID collisions and users overwriting each other's datasets by accident. (I believe one of the most frequently used dataset IDs would be test.) I think an ideal way would be to have a dedicated URL (like /dataset/upload) where users can POST their datasets, and the response would contain a unique ID under which the dataset was stored, but I'm not sure that it does not violate the basic principles of REST. What is the preferred way of dealing with such scenarios?
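
    The POST-then-Location pattern described here is in fact the standard RESTful answer for server-named resources: POST to the collection, let the server mint the ID, and return it in a 201 Created response's Location header. A client-side sketch in Python using requests (the host name and endpoint are illustrative, not a published API):

      import requests

      # POST the dataset to the collection resource; the server, not the
      # client, chooses the identifier.
      with open("mydata.csv", "rb") as f:
          resp = requests.post("https://api.example.org/datasets", data=f)

      assert resp.status_code == 201            # Created
      dataset_url = resp.headers["Location"]    # e.g. .../datasets/8f3c2a
      print("stored at", dataset_url)

      # Later queries refer to the server-assigned resource.
      result = requests.get(dataset_url)

    POSTing to the collection URL (/datasets) rather than a verb-like /dataset/upload keeps the URL space noun-based, but both are the same pattern underneath.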

    Read the article

  • Suggest the best options to me to design a dynamic web interface using PHP, MySQL and AJAX

    - by Krishna
    Hello, I am designing a web interface for a company; let me describe the company's profile. It is an insurance surveying company that currently has 5 branches and is planning to extend all over the country. It deals with 6 categories in the insurance domain, viz. Engineering, Fire, Marine, Motor, Miscellaneous, and Risk Inspection. The branches are named b1, b2, b3, b4, b5, and extending. Finally, it has contracts with 22 companies.

    For each claim they assign a unique ID of the form contractcompany/category/serialno. For example, taking contracted companies named xxx, sss and zzz:

      xxx/Engineering/001, sss/Engineering/001, ...
      xxx/Engineering/002, sss/Engineering/002, ...
      xxx/Fire/001, sss/Fire/001, ...
      xxx/Fire/002, ...

    and so on. In this way they issue a unique ID for each claim.

    What I want is to develop the interface with PHP, MySQL and AJAX, which should:

    1. Auto-generate the unique ID for each claim.
    2. Store full details of the claims with reference to the unique ID.
    3. Show all claims in one page, viewable branch-wise and category-wise.
    4. Send a monthly report (all claims given and the status of claims) to contracted companies.
    5. Give access to contracted companies, but they can view only their respective claims.
    6. Handle each claim's documents: uploaded by our own company users or the administrator, associated with the unique ID, and viewable by the contracted companies.
    7. Give access to branches to enter new claims and update old claims.
    8. Let the administrator create, update and delete all the claims and their details; only the administrator can grant new users (own company branches / contracted companies).

    Finally, the panel is completely database-driven. Could anybody help? Thanks in advance. Regards, Krishna P. [email protected]
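
    The ID scheme above is mechanical enough to sketch. A hypothetical helper (the 3-digit zero padding is inferred from the examples; in production the serial should come from a per-company, per-category counter read and incremented inside a single database transaction, so two branches cannot mint the same ID):

      def next_claim_id(company, category, last_serial):
          """Build the next claim ID, e.g. xxx/Engineering/002."""
          return "%s/%s/%03d" % (company, category, last_serial + 1)

      print(next_claim_id("xxx", "Engineering", 1))   # xxx/Engineering/002
      print(next_claim_id("sss", "Fire", 0))          # sss/Fire/001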

    Read the article

  • Is it bad practice to apply class-based design to JavaScript programs?

    - by helixed
    JavaScript is a prototype-based language, and yet it has the ability to mimic some of the features of class-based object-oriented languages. For example, JavaScript does not have a concept of public and private members, but through the magic of closures, it's still possible to provide the same functionality. Similarly, method overloading, interfaces, namespaces and abstract classes can all be added in one way or another. Lately, as I've been programming in JavaScript, I've felt like I'm trying to turn it into a class-based language instead of using it in the way it's meant to be used. It seems like I'm trying to force the language to conform to what I'm used to. The following is some JavaScript code I've written recently. Its purpose is to abstract away some of the effort involved in drawing to the HTML5 canvas element.

      /* Defines the Drawing namespace. */
      var Drawing = {};

      /* Abstract base which represents an element to be drawn on the screen.
         @param context The graphical context in which this Node is drawn.
         @param position The position of the center of this Node. */
      Drawing.Node = function(context, position) {
          return {
              /* The method which performs the actual drawing code for this Node.
                 This method must be overridden in any subclasses of Node. */
              draw: function() {
                  throw Exception.MethodNotOverridden;
              },

              /* Returns the graphical context for this Node.
                 @return The graphical context for this Node. */
              getContext: function() {
                  return context;
              },

              /* Returns the position of this Node.
                 @return The position of this Node. */
              getPosition: function() {
                  return position;
              },

              /* Sets the position of this Node.
                 @param thePosition The position of this Node. */
              setPosition: function(thePosition) {
                  position = thePosition;
              }
          };
      }

      /* Define the shape namespace. */
      var Shape = {};

      /* A circle shape implementation of Drawing.Node.
         @param context The graphical context in which this Circle is drawn.
         @param position The center of this Circle.
         @param radius The radius of this circle.
         @param color The color of this circle. */
      Shape.Circle = function(context, position, radius, color) {
          //check the parameters
          if (radius < 0)
              throw Exception.InvalidArgument;

          var node = Drawing.Node(context, position);

          //overload the node drawing method
          node.draw = function() {
              var context = this.getContext();
              var position = this.getPosition();
              context.fillStyle = color;
              context.beginPath();
              context.arc(position.x, position.y, radius, 0, Math.PI*2, true);
              context.closePath();
              context.fill();
          }

          /* Returns the radius of this Circle.
             @return The radius of this Circle. */
          node.getRadius = function() {
              return radius;
          };

          /* Sets the radius of this Circle.
             @param theRadius The new radius of this circle. */
          node.setRadius = function(theRadius) {
              radius = theRadius;
          };

          /* Returns the color of this Circle.
             @return The color of this Circle. */
          node.getColor = function() {
              return color;
          };

          /* Sets the color of this Circle.
             @param theColor The new color of this Circle. */
          node.setColor = function(theColor) {
              color = theColor;
          };

          //return the node
          return node;
      };

    The code works exactly like it should for a user of Shape.Circle, but it feels like it's held together with duct tape. Can somebody provide some insight on this?

    Read the article

  • jQuery "Autocomplete" plugin is messing up the order of my data

    - by Max Williams
    I'm using Jorn Zaefferer's Autocomplete plugin on a couple of different pages. In both instances, the order of displayed strings is a little bit messed up.

    Example 1: array of strings. Basically they are in alphabetical order, except for General Knowledge, which has been pushed to the top:

      General Knowledge,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Displayed strings:

      General Knowledge,Geography,Art and Design,Business Studies,Citizenship,Design and Technology,English,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Note that Geography has been pushed to be the second item, after General Knowledge. The rest are all fine.

    Example 2: as above, but with Cross-curricular instead of General Knowledge:

      Cross-curricular,Art and Design,Business Studies,Citizenship,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Displayed strings:

      Cross-curricular,Citizenship,Art and Design,Business Studies,Design and Technology,English,Geography,History,ICT,Mathematics,MFL French,MFL German,MFL Spanish,Music,Physical Education,PSHE,Religious Education,Science,Something Else

    Here, Citizenship has been pushed to the number 2 position. I've experimented a little, and it seems like there's a bug saying "put things that start with the same letter as the first item after the first item, and leave the rest alone". Kind of mystifying. I've tried a bit of debugging by triggering alerts inside the autocomplete plugin code, but everywhere I can see, it's using the correct order. It seems to be just when it's rendered out that it goes wrong. Any ideas anyone? max

    EDIT - reply to Clint: thanks for pointing me at the relevant bit of code, btw. To make diagnosis simpler I changed the array of values to ["carrot", "apple", "cherry"], which autocomplete re-orders to ["carrot", "cherry", "apple"]. Here's the array that it generates for stMatchSets:

      stMatchSets = ({'':[#1={value:"carrot", data:["carrot"], result:"carrot"}, #3={value:"apple", data:["apple"], result:"apple"}, #2={value:"cherry", data:["cherry"], result:"cherry"}], c:[#1#, #2#], a:[#3#]})

    So, it's collecting the first letters together into a map, which makes sense as a first-pass matching strategy. What I'd like it to do, though, is to use the given array of values, rather than the map, when it comes to populating the displayed list. I can't quite get my head around what's going on with the cache inside the guts of the code (I'm not very experienced with JavaScript).
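
    That reading of stMatchSets fits the symptom: entries are bucketed by first letter, and the first item's bucket gets emitted ahead of everything else. A few lines of Python reproduce the observed reordering (illustrative only, not the plugin's actual code):

      subjects = ["General Knowledge", "Art and Design", "Business Studies",
                  "Citizenship", "Design and Technology", "English", "Geography",
                  "History", "ICT"]

      # Group by first letter, as the plugin's stMatchSets cache does.
      buckets = {}
      for s in subjects:
          buckets.setdefault(s[0].lower(), []).append(s)

      # Emit the first item's bucket first, then everything else in order:
      # this reproduces "Geography" jumping to second place.
      first = subjects[0][0].lower()
      reordered = buckets[first] + [s for s in subjects if s[0].lower() != first]
      print(reordered)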

    Read the article

  • How to implement List, Set, and Map in a null-free design?

    - by Pyrolistical
    It's great when you can return a null/empty object in most cases to avoid nulls, but what about Collection-like objects? In Java, Map returns null if the key in get(key) is not found in the map. The best way I can think of to avoid nulls in this situation is to return an Entry<T> object, which is either the EmptyEntry<T> or contains the value T. Sure, we avoid the null, but now you can have a class cast exception if you don't check whether it's an EmptyEntry<T>. Is there a better way to avoid nulls in Map's get(K)? And for argument's sake, let's say this language doesn't even have null, so don't say "just use nulls".
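
    The usual null-free alternative is an explicit Option/Maybe type: get() always returns a wrapper, and the caller must ask it for the value (or a default), so there is nothing to cast and nothing to dereference by accident. A language-neutral sketch in Python (Java 8 later standardized this shape as java.util.Optional):

      class Option(object):
          def __init__(self, value=None, present=False):
              self._value, self._present = value, present

          @classmethod
          def of(cls, value):
              return cls(value, True)

          @classmethod
          def empty(cls):
              return cls()

          def is_present(self):
              return self._present

          def get_or_else(self, default):
              return self._value if self._present else default

      class SafeMap(object):
          """A map whose get() never returns null and never needs a cast."""
          def __init__(self):
              self._data = {}

          def put(self, key, value):
              self._data[key] = value

          def get(self, key):
              if key in self._data:
                  return Option.of(self._data[key])
              return Option.empty()

      m = SafeMap()
      m.put("a", 1)
      print(m.get("a").get_or_else(0))   # 1
      print(m.get("b").get_or_else(0))   # 0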

    Read the article

  • Geographically distributed file system with preferred locality

    - by dpb
    Hi All -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications:

    1. Each site can store files in a shared "namespace". That is, all the files would show up in the same filesystem.
    2. Each site would not send data over the WAN unless necessary. I.e., there would be local storage on each side of the WAN that would be "merged" into the same logical filesystem.
    3. Linux and free ($$$) is a must.

    Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local. All data from remote sides of the WAN would be copied locally all the time. I have looked into Lustre, and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that will automatically "prefer" local storage over remote storage. Even something that went with the lowest-latency storage would be fine. It would work most of the time, which would meet this application's requirements. Any ideas?

    Read the article

  • Linux Mounting Problem

    - by Sam
    I have an Iomega network-attached storage device on my Windows network, and I am trying to use a Clonezilla Live USB flash drive (which runs Linux) to back up my netbook to it. I'm having trouble getting the network-attached storage unit to mount using the following command:

      mount -t cifs -o username="myUsername" //192.168.1.100/backup /home/partimg

    The response from Linux is:

      [134.730738] CIFS VFS: cifs_mount failed w/return code = -6 retrying with upper case share name
      [134.788461] CIFS VFS: cifs_mount failed w/return code = -6
      mount error(6): No such device or address
      Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    I also tried adding the domain to my username (username="myUsername,domain=workgroup"), but that did not change the error. I am able to ping the network-attached storage unit from Linux on my netbook. I also booted from a Slax Live USB flash drive, and Slax auto-mounted my network-attached storage unit via Samba; unfortunately, I don't believe that I can run Clonezilla from inside the Slax installation. Does anyone have any insight into what is wrong with my mount statement? Or is there something peculiar about Iomega drives which makes this impossible?

    Read the article

  • Windows XP Does Not Follow CNAME Shares

    - by user49349
    I am supporting a mix of Windows XP Pro and Windows 7 desktops in my Active Directory network, and I am having an odd issue with XP and CNAME records. Say I have a record in my DNS for a server with an A name of something like STORAGE.company.local, and I give it a CNAME of NAS.company.local. I can go onto an XP or 7 computer and ping NAS, and it will automatically resolve to STORAGE.company.local. If I am on Windows 7 and go to Run and enter \\STORAGE or \\NAS, it will go to that server in Explorer. If I do the same in XP, STORAGE will work but NAS will not; it just times out. Is there some setting buried in XP to make this work properly?

    Read the article

  • Why do SNMP agents need MIB files?

    - by user1495181
    After reading up on SNMP and some of the questions here, I think I understand the agent's role as an SNMP service to the device (like SQL, it is an API to storage). When you execute a SQL query, the SQL engine does all the work and returns the result; you don't need to be aware of how and where the storage is done. But MIBs are not actual storage, so what is the role of my agent? If the agent only registers the MIB, as in the tutorial I followed, is it not used as a handler at all? Say I want to monitor my application's pending request queue, so I want an agent at which all SNMP requests for application_pending_request are fired, and which returns the queue depth. Why do I need an actual MIB, when all I need is to poll my application queue to get the result?
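
    A useful mental model: the MIB is the schema; it maps names like application_pending_request to numeric OIDs and declares their types, so manager and agent agree on meaning. The agent is the handler that produces live values for those OIDs. A toy dispatch sketch in Python (no real SNMP library; the OID is made up):

      # Toy agent-side dispatch: the MIB told everyone that this OID means
      # "pending request queue depth"; the handler computes the live value.
      pending_requests = ["req1", "req2", "req3"]

      def get_pending_request_depth():
          return len(pending_requests)

      HANDLERS = {
          "1.3.6.1.4.1.99999.1.1": get_pending_request_depth,  # hypothetical OID
      }

      def snmp_get(oid):
          handler = HANDLERS.get(oid)
          if handler is None:
              return "noSuchObject"
          return handler()

      print(snmp_get("1.3.6.1.4.1.99999.1.1"))   # 3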

    Read the article

  • usb_modeswitch not switching

    - by deniz
    After I upgraded from kernel 2.6.18 to 3.5.3, usb_modeswitch stopped working for me. Although lsusb shows my USB modem, usb_modeswitch does not switch it; instead of switching my modem it says "No devices in default mode found. Nothing to do. Bye.". Can you offer a solution? My system information and the output of lsusb, dmesg, usb-devices and usb_modeswitch are below.

      Kernel: Linux 3.5.3
      usb_modeswitch: 1.2.3-1
      usb_modeswitch-data: 20120120-1
      usbutils: 006-1
      libusb: 1.0.8-0.1

      root@localhost$ lsusb
      Bus 002 Device 029: ID 12d1:1446 Huawei Technologies Co., Ltd.

      root@localhost$ dmesg
      [70112.477080] usb 2-1.4: new high-speed USB device number 30 using ehci_hcd
      [70112.567757] scsi49 : usb-storage 2-1.4:1.0
      [70112.567842] scsi50 : usb-storage 2-1.4:1.1
      [70113.571433] scsi 49:0:0:0: CD-ROM HUAWEI Mass Storage 2.31 PQ: 0 ANSI: 2
      [70113.572304] scsi 50:0:0:0: Direct-Access HUAWEI TF CARD Storage PQ: 0 ANSI: 2
      [70113.574169] sr0: scsi-1 drive
      [70113.574223] sr 49:0:0:0: Attached scsi CD-ROM sr0
      [70113.574250] sr 49:0:0:0: Attached scsi generic sg1 type 5
      [70113.574350] sd 50:0:0:0: Attached scsi generic sg2 type 0
      [70113.577173] sd 50:0:0:0: [sdb] Attached SCSI removable disk

      root@localhost$ usb-devices
      T: Bus=02 Lev=02 Prnt=02 Port=03 Cnt=01 Dev#= 30 Spd=480 MxCh= 0
      D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
      P: Vendor=12d1 ProdID=1446 Rev=00.00
      S: Manufacturer=Huawei Technologies
      S: Product=HUAWEI Mobile
      C: #Ifs= 2 Cfg#= 1 Atr=c0 MxPwr=500mA
      I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage
      I: If#= 1 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage

      root@localhost$ cat /etc/usb_modeswitch.d/12d1\:1446
      # Huawei, newer modems
      TargetVendor= 0x12d1
      TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506"
      MessageContent="55534243123456780000000000000011062000000100000000000000000000"

      root@localhost$ usb_modeswitch -c /etc/usb_modeswitch.d/12d1:1446 -v 12d1 -p 1446 -W
      * usb_modeswitch: handle USB devices with multiple modes
      * Version 1.2.3 (C) Josua Dietze 2012
      * Based on libusb0 (0.1.12 and above)
      ! PLEASE REPORT NEW CONFIGURATIONS !
      DefaultVendor= 0x12d1
      DefaultProduct= 0x1446
      TargetVendor= 0x12d1
      TargetProduct= not set
      TargetClass= not set
      TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506"
      DetachStorageOnly=0
      HuaweiMode=0
      SierraMode=0
      SonyMode=0
      QisdaMode=0
      GCTMode=0
      KobilMode=0
      SequansMode=0
      MobileActionMode=0
      CiscoMode=0
      MessageEndpoint= not set
      MessageContent="55534243123456780000000000000011062000000100000000000000000000"
      NeedResponse=0
      ResponseEndpoint= not set
      InquireDevice enabled (default)
      Success check disabled
      System integration mode disabled
      usb_set_debug: Setting debugging level to 15 (on)
      usb_os_find_busses: Skipping non bus directory devices
      usb_os_find_busses: Skipping non bus directory drivers
      usb_os_find_busses: Skipping non bus directory uevent
      usb_os_find_busses: Skipping non bus directory drivers_probe
      usb_os_find_busses: Skipping non bus directory drivers_autoprobe
      Looking for target devices ...
      No devices in target mode or class found
      Looking for default devices ...
      No devices in default mode found. Nothing to do. Bye.

    Thanks in advance.

    Read the article

  • What is a typical scenario for an end-user report designer?

    - by Sebastian
    Hello! I'm wondering what the typical scenario would be for using an end-user report designer. What I'm thinking of is to have a base report with all the columns that I can have, along with a basic view of the report (formatting, order of columns, etc.), and then let the user change that format and order, remove data from it or add data to it (from the available columns), etc. Is that a common way to address what is called an end-user designer for reports, or am I off track? I know it depends on the user (if it's someone that can handle SQL or not, for example), but is it common to have a scenario where the user can build everything from the SQL query to the formatting? Thanks! Sebastian
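
    One common shape for this is exactly the base-report idea: the query stays fixed and returns every permitted column, and the user's "design" is a stored view definition (column subset, order, labels) applied at render time, which keeps SQL out of end users' hands. A minimal sketch (data and field names invented):

      base_rows = [
          {"id": 1, "customer": "Acme", "total": 120.0, "region": "EU"},
          {"id": 2, "customer": "Bolt", "total": 80.5,  "region": "US"},
      ]

      # The user's saved "design": which columns, in what order.
      user_view = ["customer", "region", "total"]

      def render(rows, columns):
          print(" | ".join(columns))
          for row in rows:
              print(" | ".join(str(row[c]) for c in columns))

      render(base_rows, user_view)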

    Read the article

  • Database design - one link table or multiple link tables?

    - by David
    Hi there, I'm working on a front end for a database where each table essentially has a many-to-many relationship with all other tables. I'm not a DB admin; I've just had a few basic DB courses. The typical solution in this case, as I understand it, would be multiple link tables to join each 'real' table. Here's what I'm proposing instead: one link table that has foreign key dependencies on the PKs of all the other tables. Is there any reason this could turn out badly in terms of scalability, flexibility, etc. down the road?

    Read the article

  • Are there any tools to help the user to design a State Machine to be consumed by my application?

    - by kolrie
    When reading this question I remembered there was something I had been researching for a while, and I thought Stack Overflow could be of help. I have created a framework that handles applications as state machines. Currently all the state business logic and transitions are handled via Java code. I was looking for some UI implementation that would allow the user to draw the state machines and transitions, and generate a file that can later be consumed by my framework to "run" the workflow according to one or more defined state machines. Ideally I would like to use an open standard like SCXML. The goal for the UI would be to have something like the plugin IBM has for Rational Software Architect. Do you know any editor, plugin or library that would have something similar, or at least serve as a good starting point?
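
    If the drawing tool emits SCXML, the consuming framework can stay small, since states and transitions are plain XML elements. A rough Python sketch of loading and stepping a machine (a simplified subset of SCXML, not a conformant interpreter):

      import xml.etree.ElementTree as ET

      SCXML = """
      <scxml initial="idle">
        <state id="idle">    <transition event="start" target="running"/> </state>
        <state id="running"> <transition event="stop"  target="idle"/>    </state>
      </scxml>
      """

      root = ET.fromstring(SCXML)
      # state id -> {event: target}
      table = {
          s.get("id"): {t.get("event"): t.get("target") for t in s.findall("transition")}
          for s in root.findall("state")
      }

      state = root.get("initial")
      for event in ["start", "stop"]:
          state = table[state].get(event, state)   # stay put on unknown events
          print(event, "->", state)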

    Read the article

  • Zero-channel RAID for High Performance MySQL Server (IBM ServeRAID 8k) : Any Experience/Recommendation?

    - by prs563
    We are getting this IBM rack mount server, and it has this IBM ServeRAID 8k storage controller with zero-channel RAID and 256 MB of battery-backed cache. It can support RAID 10, which we need for our high-performance MySQL server, which will have 4 x 15,000 RPM 300 GB SAS HDDs. This is mission-critical and we want as much bandwidth and performance as possible. Is this a good card, or should we replace it with another IBM RAID card?

    The IBM ServeRAID 8k SAS Controller option provides 256 MB of battery-backed 533 MHz DDR2 standard power memory in a fixed mounting arrangement. The device attaches directly to the IBM planar, which can provide full RAID capability.

      Manufacturer: IBM
      Manufacturer Part #: 25R8064
      Cost Central Item #: 10025907
      Product Description: IBM ServeRAID 8k SAS - Storage controller (zero-channel RAID) - RAID 0, 1, 5, 6, 10, 1E
      Device Type: Storage controller (zero-channel RAID) - plug-in module
      Buffer Size: 256 MB
      Supported Devices: Disk array (RAID)
      Max Storage Devices Qty: 8
      RAID Level: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 1E
      Manufacturer Warranty: 1 year

    Read the article

  • How to design a class for managing file paths?

    - by remi bourgarel
    Hi all, in my app I generate some XML files, for instance "/xml/product/123.xml", where 123 is the product's ID and 123.xml contains information about this product. I also have "/xml/customer/123.xml", where 123.xml contains information about client 123. How can I manage these file paths?

    1. I create the file path directly in the serialization method.
    2. I create 2 static classes, CustomerSerializationPathManager and ProductSerializationPathManager, each with one method: getPath(int customerID) and getPath(int productID).
    3. I create one static class, SerializationPathManager, with 2 methods: getCustomerPath(int customerID) and getProductPath(int productID).
    4. Something else.

    I'd prefer solution 3, because I think there's only one reason to change this class: changing the root directory. So I'd like to have your thoughts about it. Thanks!
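
    A sketch of option 3, in Python for illustration (the original question is language-agnostic); the root directory is the single point of change:

      import os

      class SerializationPathManager(object):
          ROOT = "/xml"   # the one thing likely to change

          @classmethod
          def customer_path(cls, customer_id):
              return os.path.join(cls.ROOT, "customer", "%d.xml" % customer_id)

          @classmethod
          def product_path(cls, product_id):
              return os.path.join(cls.ROOT, "product", "%d.xml" % product_id)

      print(SerializationPathManager.product_path(123))    # /xml/product/123.xml
      print(SerializationPathManager.customer_path(123))   # /xml/customer/123.xml

    Keeping the serialization code free of path literals means a root move touches one constant rather than every caller.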

    Read the article

  • LUN access issue in an ESX4 cluster

    - by rmustafa
    Hi, I've created volumes on an EqualLogic PS 6000 XV (having 2 members in 1 pool) and checked that those volumes can easily be detected by the iSCSI software in Windows. But the problem is with ESX: I am not able to see the assigned disk on the ESX server. Let me explain what I've done:

    1. Created a cluster with HA and DRS enabled.
    2. Added 3 ESX4 hosts.
    3. Added a VMkernel and configured it on all 3 ESX4 hosts; enabled vMotion and FT on the same adapter.
    4. Went to the iSCSI storage adapter properties and enabled iSCSI.
    5. Tried to discover the available storage with the controller IP via dynamic discovery, but was not able to see the assigned storage.

    Note: the same volume is accessible from Windows, which means there is no issue on the storage side. Am I right? Note: I wanted to mount the same volume on all 3 ESX hosts. Please suggest. Thanks & Regards, Rashid Mustafa

    Read the article

  • What is the basic design idea behind the Scala for-loop implicit box/unboxing of numerical types?

    - by IODEV
    I'm trying to understand the behavior of the Scala for-loop's implicit boxing/unboxing of "numerical" types. Why do the first two fail, but not the rest?

    1) Fails:

      scala> for (i:Long <- 0 to 10000000L) {}
      <console>:19: error: type mismatch;
       found   : Long(10000000L)
       required: Int
             for (i:Long <- 0 to 10000000L) {}
                                 ^

    2) Fails:

      scala> for (i <- 0 to 10000000L) {}
      <console>:19: error: type mismatch;
       found   : Long(10000000L)
       required: Int
             for (i <- 0 to 10000000L) {}
                            ^

    3) Works:

      scala> for (i:Long <- 0L to 10000000L) {}

    4) Works:

      scala> for (i <- 0L to 10000000L) {}

    Read the article

< Previous Page | 314 315 316 317 318 319 320 321 322 323 324 325  | Next Page >