Search Results

Search found 26774 results on 1071 pages for 'distributed development'.

Page 719/1071 | < Previous Page | 715 716 717 718 719 720 721 722 723 724 725 726  | Next Page >

  • Is it possible to partition more than one way at a time in SQL Server?

    - by meeting_overload
    I'm considering various ways to partition my data in SQL Server. One approach I'm looking at is to split a particular huge table into 8 partitions, and then, within each of those partitions, partition again on a different column. Is this even possible in SQL Server, or am I limited to defining one partition column + function + scheme per table? I'm interested in the general answer, but the strategy I have in mind is for a Distributed Partitioned View: partition the data under the first scheme, using the DPV to spread the huge amount of data over 8 machines, and then, on each machine, partition that portion of the full table on another partition key so that I can (for example) drop sub-partitions as required.
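
    For reference, SQL Server ties a table to a single partition function and scheme; a minimal sketch of that single level (names, column and boundary values here are made up) looks like this:

      CREATE PARTITION FUNCTION pfByRegion (int)
          AS RANGE LEFT FOR VALUES (1, 2, 3, 4, 5, 6, 7);   -- 8 partitions

      CREATE PARTITION SCHEME psByRegion
          AS PARTITION pfByRegion ALL TO ([PRIMARY]);

      CREATE TABLE dbo.HugeTable (
          Id       bigint       NOT NULL,
          RegionId int          NOT NULL,
          Payload  varchar(100) NULL
      ) ON psByRegion (RegionId);   -- only one scheme can be named here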

    Read the article

  • Load balancer - how to write one for a custom application?

    - by Poni
    Hi! I've written a simple server application that will run distributed over several machines. My question is: how does a network load balancer work, in general? I've heard of round-robin and other algorithms, but what I haven't found an answer to is how the process actually goes, in socket terms. Does the client connect to one of the load balancer machines, ask for a "free-to-connect-to" server, and then simply connect to it? That's the simplest way I can think of. Or does the client use the load balancer as a proxy (which implies that the load balancers must stay connected to the application servers at all times, and all data is transferred through them)? It's more of a general question: how would you do this? Thank you all!
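
    As an illustration of the first, redirect-style flavour, here is a minimal Python sketch in which the balancer simply hands each client the address of the next backend in round-robin order (hosts and ports are made up; a proxy-style balancer would relay the traffic itself instead):

      import itertools
      import socket

      BACKENDS = [("10.0.0.1", 9000), ("10.0.0.2", 9000), ("10.0.0.3", 9000)]

      def serve(listen_port=8000):
          backends = itertools.cycle(BACKENDS)   # endless round-robin iterator
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind(("", listen_port))
          srv.listen(5)
          while True:
              client, _ = srv.accept()
              host, port = next(backends)
              # tell the client which server to connect to, then hang up
              client.sendall(f"{host}:{port}\n".encode())
              client.close()

      if __name__ == "__main__":
          serve()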

    Read the article

  • git: having 2 push/pull repos in sync (or 1 push/pull and 1 pull in sync)

    - by xavjuan
    Hello, we work across multiple geographically separate sites. Today all of our git clones live on one site, A, so users at site B have to go over ssh to clone or push changes. These are bare repositories that are updated only through pushes. Ideally, for clone/push performance, I'd like to limit the trips over ssh. I'd like a copy of git repository X to live on both site A and site B, with some syncing mechanism between them, OR have X live on both sites but only allow pushing to A (and have that set up correctly at clone time on B). I'm worried about the case where someone on site A pushes a change to the repository at site A at the same time that someone on site B pushes a truly conflicting change to the repository at site B. Is there a syncing solution built into git for distributed open repositories like this? Or a way to have a clone of X set its origin/parent to the X at the other site? Thanks, -John
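
    If the pull-only-mirror variant is acceptable, a minimal sketch of it (host and paths are placeholders) could be:

      # on site B: keep a read-only mirror of X that is refreshed from site A;
      # all pushes still go only to site A
      git clone --mirror ssh://siteA.example.com/repos/X.git X.git
      # refresh periodically, e.g. from cron
      cd X.git && git remote update --prune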

    Read the article

  • POST xml to php with apache2

    - by Berry
    I'm working on an application that receives XML data via POST, processes it with a PHP script, and returns an XML response. I'm getting the XML with this PHP code:

      $requestStr = file_get_contents('php://input');
      $requests = simplexml_load_string($requestStr);

    which works fine on the Linux-based product hardware using nginx as the server. However, for testing I'd like to be able to run it on my MacBook Pro, so I can avoid the "build image, install on product, reboot product, wait, test change" loop while I do targeted development on this XML processor. I enabled "web sharing", which starts up Apache, added a rewrite rule to point a convenient URI at my development source directory, and used curl to send a request to my PHP script:

      curl -H "Content-Type:text/xml" -d @request.xml http://localhost/test/path/testscript

    The testscript request is handled by the PHP script fine, but when it goes to read php://input I get nothing, just the empty string. Anyone have a clue why this works under Linux with nginx and not under Mac OS with Apache? I've googled and searched stackoverflow.com to no avail. Thanks for any answers. UPDATE: I've discovered that, at least in this configuration, reading from php://stdin works fine while php://input does not. Who knew?
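
    For completeness, a hedged sketch of the workaround mentioned in the update, reading the raw body from php://input and falling back to php://stdin when it comes back empty:

      <?php
      // read the raw POST body; fall back to stdin for setups where
      // php://input turns up empty (as observed above)
      $raw = file_get_contents('php://input');
      if ($raw === false || $raw === '') {
          $raw = file_get_contents('php://stdin');
      }
      $requests = simplexml_load_string($raw);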

    Read the article

  • What is the best way to password protect folder/page using php without a db or username

    - by Salt Packets
    What is the best way to password protect a folder using PHP, without a database or user names? Basically I have a page that lists contacts for an organization, and I need to protect that folder without creating an account for every user. There would be just one password that gets changed every so often and distributed to the group. I understand this is not very secure, but nevertheless I would like to know the best way to do it. It would be nice if the password were remembered for a while once a user has entered it correctly.
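
    One common minimal approach is a session gate included at the top of every page in the folder; a sketch, assuming a shared password stored as a hash (the hash value below is a placeholder):

      <?php
      // protect.php - include this before any output on each protected page
      session_start();

      $passwordHash = '...';   // placeholder: output of password_hash('the-shared-password', PASSWORD_DEFAULT)

      if (empty($_SESSION['authed'])) {
          if (isset($_POST['pw']) && password_verify($_POST['pw'], $passwordHash)) {
              $_SESSION['authed'] = true;   // remembered for the rest of the session
          } else {
              echo '<form method="post">Password: <input type="password" name="pw">'
                 . ' <input type="submit" value="Enter"></form>';
              exit;
          }
      }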

    Read the article

  • Transitioning from Domain Authentication to SQL Server Authentication

    - by Albert Perrien
    Greetings all, I've run into a problem that has me stumped. I've put together a database in SQL Server Express, and I'm having a strange permissions problem. The database is on my development machine with a domain user, DOMAIN\albertp, and my development database server is set to "SQL Server and Windows Authentication" mode. I can edit and query my database without any problems when I log in using Windows Authentication. However, when I log in as any user that uses SQL Server authentication (including sa), running a query such as

      SELECT * FROM [Testing].[dbo].[AuditingReport]

    gives me:

      Msg 18456, Level 14, State 1, Line 1
      Login failed for user 'auditor'.

    I'm logged into the server from SQL Server Management Studio as 'auditor', and I don't see anything in the error log about the login failure. I've already run:

      Use Testing;
      Grant All to auditor;
      Go

    and I still get the same error. What permissions do I have to set for the database to be usable by anyone other than my personal domain login? Or am I looking at the wrong problem? My ultimate goal is to have the database accessible from a set of PHP pages, using either a common login (hence 'auditor') or a login specific to each of a set of individual users.
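
    For what it's worth, error 18456 is raised at the server login level rather than by database permissions, so a sketch of the server-side setup that is usually needed (the password is a placeholder) would be:

      -- create the server-level login used by SQL Server authentication
      CREATE LOGIN auditor WITH PASSWORD = 'ChangeMe!123', CHECK_POLICY = ON;
      GO
      -- map it to a user inside the database and grant read access
      USE Testing;
      CREATE USER auditor FOR LOGIN auditor;
      EXEC sp_addrolemember 'db_datareader', 'auditor';
      GO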

    Read the article

  • Why does this Rails named scope return empty (uninitialized?) objects?

    - by mipadi
    In a Rails app, I have a model, Machine, that contains the following named scope:

      named_scope :needs_updates, lambda {
        {
          :select => self.column_names.collect{|c| "\"machines\".\"#{c}\""}.join(','),
          :group  => self.column_names.collect{|c| "\"machines\".\"#{c}\""}.join(','),
          :joins  => 'LEFT JOIN "machine_updates" ON "machine_updates"."machine_id" = "machines"."id"',
          :having => ['"machines"."manual_updates" = ? AND "machines"."in_use" = ? AND (MAX("machine_updates"."date") IS NULL OR MAX("machine_updates"."date") < ?)',
                      true, true, UPDATE_THRESHOLD.days.ago]
        }
      }

    This named scope works fine in development mode. In production mode, however, it returns the 2 models as expected, but the models are empty or uninitialized; that is, actual objects are returned (not nil), but all the fields are nil. For example, inspecting the return value of the named scope in the console gives:

      [#<Machine >, #<Machine >]

    But, as you can see, all the fields of the returned objects are set to nil. The production and development environments are essentially the same; both are using a SQLite database. Any ideas what's going wrong?

    Read the article

  • Building a minimal plugin architecture in Python.

    - by dF
    I have an application, written in Python, which is used by a fairly technical audience (scientists). I'm looking for a good way to make the application extensible by the users, i.e. a scripting/plugin architecture. I am looking for something extremely lightweight. Most scripts, or plugins, are not going to be developed and distributed by a third-party and installed, but are going to be something whipped up by a user in a few minutes to automate a repeating task, add support for a file format, etc. So plugins should have the absolute minimum boilerplate code, and require no 'installation' other than copying to a folder (so something like setuptools entry points, or the Zope plugin architecture seems like too much.) Are there any systems like this already out there, or any projects that implement a similar scheme that I should look at for ideas / inspiration?
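
    One very small sketch of the copy-a-file-into-a-folder approach (the plugins folder name and the optional register() convention are assumptions, not an existing framework):

      import importlib.util
      import pathlib

      def load_plugins(folder="plugins"):
          """Import every .py file in `folder` and collect whatever its
          optional register() function returns."""
          plugins = []
          for path in pathlib.Path(folder).glob("*.py"):
              spec = importlib.util.spec_from_file_location(path.stem, path)
              module = importlib.util.module_from_spec(spec)
              spec.loader.exec_module(module)
              if hasattr(module, "register"):
                  plugins.append(module.register())
          return plugins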

    Read the article

  • php cache zend framework

    - by msaif
    The server side is PHP + Zend Framework. The problem: I have a fairly large data set, approximately 5000 records with 5 columns each, in an input.txt file. I'd like to read all of the data into memory only once and send some of it back on every browser request, but if I update input.txt the cached copy must be synchronized automatically. So I need to solve this with a memory caching technique; however, a cache normally has an expiry time, and if input.txt is updated before the cache expires I still need the cached data to be refreshed. I am using Zend Framework 1.10 - is this possible in Zend Framework? Can anybody give me a few lines of Zend Framework code? I have no option to use a memcached server (distributed), only Zend Framework.
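
    One way this is often handled in ZF1 is Zend_Cache's File frontend, which invalidates the cache whenever a designated master file changes; a sketch (paths and the cache id are placeholders, and the option names are worth double-checking against your exact ZF version):

      $frontendOptions = array(
          'master_file'             => '/path/to/input.txt',  // cache is rebuilt when this file changes
          'lifetime'                => null,                   // no time-based expiry
          'automatic_serialization' => true,
      );
      $backendOptions = array('cache_dir' => '/tmp/cache/');

      $cache = Zend_Cache::factory('File', 'File', $frontendOptions, $backendOptions);

      if (($rows = $cache->load('input_txt')) === false) {
          $rows = array_map('str_getcsv', file('/path/to/input.txt'));
          $cache->save($rows, 'input_txt');
      }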

    Read the article

  • Articles about replication schemes/algorithms?

    - by jkff
    I'm designing a hierarchical distributed system (every node has zero or more "master" nodes to which it propagates its current data). The data gets updated continuously, and I'd like to guarantee that at least N nodes have almost-current data at any given time. I do not need complete consistency, only eventual consistency (i.e. for any time instant, the current snapshot of the data should eventually appear on at least N nodes; it is tricky to define "current" here, but still). Nodes may fail and come back up at any moment, and there is no single "central" node. O overflowers! Point me to some good papers describing replication schemes. So far I've found one: "Consistency Management in Optimistic Replication Algorithms".

    Read the article

  • Eclipse 3.7 Classic Nightmare - ADT Installation

    - by Cal
    I've been trying to install the ADT for Eclipse Classic 3.7, to no avail. From what I've seen in searches, the general consensus seems to be to update the software, but alas I cannot do that either. Here is an example of the error message received when trying to update Eclipse, or when attempting to install from a web location:

      Some sites could not be found. See the error log for more detail.
      Unable to read repository at http://download.eclipse.org/eclipse/updates/3.7/content.xml.
      Cannot assign requested address: JVM_Bind

    I followed the troubleshooting recommendations of Google/Android's developer section and attempted to install ADT from an archive instead. This is the resulting error:

      Cannot complete the install because one or more required items could not be found.
      Software being installed: Android Development Tools 11.0.0.v201105251008-128486 (com.android.ide.eclipse.adt.feature.group 11.0.0.v201105251008-128486)
      Missing requirement: Android Development Tools 11.0.0.v201105251008-128486 (com.android.ide.eclipse.adt.feature.group 11.0.0.v201105251008-128486) requires 'org.eclipse.gef 0.0.0' but it could not be found

    Now, from what I hear, the inability to update/install over the Internet seems to be a proxy-related issue, but I don't believe I'm behind anything like that (I'm just using my computer on my home network). I'm using the most up-to-date versions of everything I can think of (ADT, Eclipse, SDK Tools, etc.), on Windows 7 Ultimate 64-bit with the 64-bit version of Eclipse Classic.

    Read the article

  • Scrum - Responding to traditional RFPs

    - by Todd Charron
    Hi all, I've seen many articles about how to put together agile RFPs and negotiate agile contracts, but what about when you're responding to a more traditional RFP? Any advice on how to meet the requirements of the RFP while still presenting an agile approach? A lot of these traditional RFPs request specific technical implementations, timelines, and costs, while also requesting exact details about milestones and how the technical solutions will be implemented. While I'm sure that in traditional waterfall it's normal to pretend these things are facts, it seems wrong for an agile organization to commit to something like this just to get through the initial screening process. What methods have you used to respond to more traditional RFPs? Here's a sample one grabbed from Google: http://www.investtoronto.ca/documents/rfp-web-development.pdf. In particular, "3. A detailed work plan outlining how they expect to achieve the four deliverables within the timeframe outlined. Plan for additional phases of development." and "8. The detailed cost structure, including per diem rates for team members, allocation of hours between team members, expenses and other out of pocket disbursements, and a total upset price."

    Read the article

  • continuous integration web service

    - by Josh Moore
    I am in a position where I could become the team leader of a team distributed over two countries. This team would be the tech team for a startup company that we plan to bootstrap on limited funds, so I am trying to find ways to minimize upfront expenses. Right now we are planning to use Java and will have a lot of JUnit tests. I am planning on using GitHub for version control and Lighthouse as a bug tracker. In addition I want to add a continuous integration server, but I do not know of any continuous integration servers that are offered as a web service. Does anybody know if there are continuous integration servers available in a software-as-a-service model? P.S. If anybody knows where I can get all three of these services in one place, that would be great to know too.

    Read the article

  • assistance with classifying tests

    - by amateur
    I have a .NET C# library for which I am currently writing unit tests. At present I am writing tests for a cache provider class that I created. Being new to writing unit tests, I have two questions. First: my cache provider class is the abstraction layer over my distributed cache, AppFabric, so testing aspects of it such as adding to the cache and removing from the cache involves communicating with AppFabric. Are such tests still categorised as unit tests, or are they integration tests? Second: because the methods under test interact with AppFabric, I would like to time them, and if they take longer than a specified benchmark the tests should fail. Can such a performance benchmark test be classified as a unit test? The way my tests are set up, I want to group all unit tests together, all integration tests together, and so on, which is why I ask these questions and would appreciate input on them.
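
    For the timing part, one way to express the benchmark in the test itself, sketched with NUnit (the framework choice and the CacheProvider class are assumptions standing in for the real code):

      using System.Diagnostics;
      using NUnit.Framework;

      [TestFixture]
      public class CacheProviderTimingTests
      {
          // NUnit can enforce an upper bound directly...
          [Test, Timeout(2000)]   // fails if the test runs longer than 2 seconds
          public void Add_CompletesWithinTwoSeconds()
          {
              var provider = new CacheProvider();   // hypothetical class under test
              provider.Add("key", "value");
          }

          // ...or the benchmark can be asserted explicitly with a Stopwatch.
          [Test]
          public void Remove_CompletesWithinBenchmark()
          {
              var provider = new CacheProvider();
              var sw = Stopwatch.StartNew();
              provider.Remove("key");
              Assert.Less(sw.ElapsedMilliseconds, 500);
          }
      }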

    Read the article

  • DLL dependant on curllib.dll - How can I fix this?

    - by haraldo
    Hi there, I'm new to developing in C++. I've developed a DLL that uses curllib to make HTTP requests. When I run the DLL through depend.exe it shows that my DLL now depends on curllib.dll, which simply doesn't work for me: my project is set to use the static library rather than the shared one, and my DLL will be distributed on its own, so I cannot rely on users having libcurl.dll installed. I thought that including libcurl in my project would be all that was needed for my DLL to be self-contained. If this can't be resolved, is there an alternative method I can use to make HTTP requests? Obviously I would prefer to stick with libcurl. Thanks in advance.

    Read the article

  • Deploying ASP.Net MVC application

    - by a_m0d
    I've recently reached the stage where an ASP.NET MVC application I am developing is ready to be deployed to the production server. I've worked out how to publish the application - I've got all the files on the server and can access them over the internet. However, I can't work out how to deploy my database. The server has SQL Server Management Studio Express installed, as the database used is a SQL Server Express database. I have the server instance up and running - I just don't know how to add the tables, etc. to the database. I have created the "CREATE TABLE" scripts on the development machine, but as far as I can see, Management Studio does not provide any way to actually run these scripts. I have looked through all the menu items I could find, and none of them worked. Even using the "Create new query..." option and pasting the script in didn't work. When I try "File > Open...", select a script to run, set the correct database from the dropdown list on the toolbar, and then execute the script, it complains about not finding the database file (even when I set the USE [...] statement to the correct path). If I delete the USE [...] statement, the script complains that it can't find the [dbo].[Invoices] object; but of course it shouldn't be able to find it, because it's trying to create it! tl;dr: What's the best way to make sure that the database on the production machine matches the database on my development machine?
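
    A sketch of the usual sequence, assuming the database itself has not yet been created on the production instance (the database name is made up; note that USE takes a database name, not a file path):

      CREATE DATABASE MyAppDb;
      GO
      USE MyAppDb;
      GO
      -- now run the generated scripts in this context, e.g.:
      CREATE TABLE dbo.Invoices (
          Id int IDENTITY(1,1) PRIMARY KEY
          -- remaining columns from the generated CREATE TABLE script
      );
      GO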

    Read the article

  • FileInputStream for a generic file System

    - by Akhil
    I have a file that contains Java serialized objects such as a "Vector". I have stored this file on the Hadoop Distributed File System (HDFS), and I intend to read it (using readObject) in one of the map tasks. I suppose FileInputStream in = new FileInputStream("hdfs/path/to/file"); won't work, since the file is stored on HDFS. So I thought of using the org.apache.hadoop.fs.FileSystem class, but unfortunately it does not have any method that returns a FileInputStream. All it has is a method that returns an FSDataInputStream, but I want an input stream that can read serialized Java objects like a Vector from a file, rather than just the primitive data types that FSDataInputStream handles. Please help!
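
    For what it's worth, FSDataInputStream extends java.io.InputStream, so a sketch along these lines (the path is a placeholder) should be workable:

      import java.io.ObjectInputStream;
      import java.util.Vector;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataInputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class HdfsObjectReader {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              FileSystem fs = FileSystem.get(conf);
              // FSDataInputStream is an InputStream, so ObjectInputStream can wrap it
              FSDataInputStream in = fs.open(new Path("/path/to/file"));
              ObjectInputStream ois = new ObjectInputStream(in);
              Vector<?> data = (Vector<?>) ois.readObject();
              ois.close();
              System.out.println("read " + data.size() + " elements");
          }
      }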

    Read the article

  • Question about mysql indexes on low to medium cardinality columns

    - by Kevin J
    I have a general question about the way that database indexing works, particularly in mysql. Let's say I have a table with a million rows with a column "ClientID" that is distributed relatively equally among 30 values. Thus, this column is very low cardinality (30) relative to the primary key (1 million). Now, I understand that you shouldn't create indexes on low cardinality fields. However, in this case, queries are only ever done with one of the 30 clientIDs. Thus, wouldn't creating an index on ClientID be helpful, as the search space is automatically reduced to 1/30th what it normally would be? Or is my understanding of how the index works flawed? Thanks
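
    A quick way to check whether the optimizer actually uses such an index is EXPLAIN; a sketch (the table name is made up, the column is the one from the question):

      CREATE INDEX idx_clientid ON orders (ClientID);

      EXPLAIN SELECT *
      FROM orders
      WHERE ClientID = 17;
      -- the "rows" column estimates how much of the table MySQL expects to examine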

    Read the article

  • Plugin architecture in .net: unloading

    - by henchman
    Hello everybody, I need to implement a plugin architecture in C#/.NET in order to load user-defined actions (data-type handling code for a custom data grid, conversions, ...) from assembly files that are not statically linked. Because the application has to handle many user-defined actions, I need to unload them once they have executed in order to reduce memory usage. I found several good articles about plugin architectures, e.g. ExtensionManager and PluginArchitecture, but none of them gave me enough detail on properly unloading an assembly. Also, since the program is to be distributed and the user-defined actions are (as the name states) user defined: how do I prevent the assembly from executing malicious code (e.g. closing my program, deleting files)? Are there any other pitfalls any of you have encountered?
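
    On the unloading side, in the .NET Framework an assembly can only be unloaded together with the AppDomain it was loaded into; a rough sketch (the interface and type names are made up):

      using System;

      // The plugin type must derive from MarshalByRefObject so that calls cross the
      // AppDomain boundary via a proxy instead of loading the assembly into the host.
      public interface IUserAction { void Execute(); }

      public static class PluginRunner
      {
          public static void RunAndUnload(string assemblyPath, string typeName)
          {
              AppDomain domain = AppDomain.CreateDomain("PluginDomain");
              try
              {
                  var action = (IUserAction)domain.CreateInstanceFromAndUnwrap(assemblyPath, typeName);
                  action.Execute();
              }
              finally
              {
                  AppDomain.Unload(domain);   // releases the plugin assembly's memory
              }
          }
      }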

    Read the article

  • List of private iPhone APIs?

    - by diego nunes
    Hi there, everybody. I need to build an app to be distributed ad hoc (it doesn't need to go to the store), but I need to get information about "data usage" (GPRS/3G traffic). The information is available on the system, but there is no official API call to get it. One app did make it through Apple's review (it's called "Download Meter"), and I emailed its developers to see if they would share the call, but they were not in that mood. Is there any list of private APIs, or anything like that? Does anyone have any idea how I could get that information? Again: the app doesn't need to go to the store, but I need to be able to install it on a stock iPhone (ad hoc will do). Thanks.

    Read the article

  • Component based web project directory layout with git and symlinks

    - by karlthorwald
    I am planning the directory structure for a Linux/Apache/PHP web project like this, where only www.example.com/webroot/ will be exposed by Apache:

      www.example.com/
        webroot/
          index.php
          comp1/
          comp2/
        component/
          comp1/
            comp1.class.php
            comp1.js
          comp2/
            comp2.class.php
            comp2.css
        lib/
          lib1/
            lib1.class.php

    The component/ and lib/ directories will only be on the PHP include path. To make the CSS and JS files visible under the webroot directory I am planning to use symlinks:

      webroot/
        index.php
        comp1/
          comp1.js (symlinked)
        comp2/
          comp2.css (symlinked)

    I tried to follow these principles: lay out by components and libraries, not by file type and not by "public" or "non-public" (index.php is an exception); this is for easier development. Symlink the files that need to be public for the components and libs to a public location, but still mirror the layout, so the component and library structure is also visible in the resulting HTML links, which might help development. Git usage should be safe and always work; it would be OK to follow some procedure to add a symlink to git, but after that, checking them out or changing branches should be handled safely and cleanly. How will git handle the symlinking of single files - is there something to consider? When it comes to images I will need to link whole directories; how do I handle that with git?

      component/
        comp3/
          comp3.class.php
          img/
            img1.jpg
            img2.jpg
            img3.jpg

    They should be linked here:

      webroot/
        comp3/
          img/ (symlinked ?)

    If using symlinks for this has disadvantages, maybe I could move the images into the webroot/ tree directly, which would break the first principle for the sake of the third (git practicability). So this is a git and symlink question, but I would also be interested in comments about the PHP layout - maybe you want to use the comment function for that.
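
    For what it's worth, git treats a symlink as a tiny object that records only the link target, so adding one is the same as adding any other path; a quick sketch (paths taken from the layout above):

      # from the project root: create the link, then add it like any other file;
      # git stores just the target string (mode 120000), not the file it points to
      ln -s ../../component/comp1/comp1.js webroot/comp1/comp1.js
      git add webroot/comp1/comp1.js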

    Read the article

  • Building Cocoa UIs for OS X with C# and Mono

    - by Antony Perkov
    Has anyone spent any time comparing the various Objective-C bridges and associated Cocoa wrappers for Mono? I want to port an existing C# application to run on OS X. Ideally I'd run the application on Mono and build a native Cocoa UI for it, and I'm wondering which bridge would be the best choice. In case it's useful to anyone, here are some links to the bridges I've found so far:

      - CocoaSharp - distributed with Mono on OS X - www.cocoa-sharp.com
      - Monobjc - better documentation than the others (in my opinion) - www.mono-project.com/CocoaSharp and www.monobjc.net
      - NObjective - (apparently) faster than the others - code.google.com/p/nobjective
      - MObjc / MCocoa - code.google.com/p/mobjc and code.google.com/p/mcocoa
      - ObjC# - www.mono-project.com/ObjCSharp

    Read the article

  • Neural Networks or Human-computer interaction

    - by Shahin
    I will be entering my third year of university next academic year, once I've finished my placement year as a web developer, and I would like to hear some opinions on the two modules in the title. I'm interested in both, but I want to pick the one that will be relevant to my career and that I can apply to systems I develop. I'm doing an Internet Computing degree; it covers web development, networking, database work and programming. Though I had set my sights on becoming a web developer, I'm not so sure about that any more, so I am trying not to limit myself to that area of development. I know HCI would help me as a web developer, but do you think it's worth it? Do you think neural network knowledge could realistically help me in a system I write in the future? Thanks.

    EDIT: Hi guys, I thought it would be useful to follow up with what I decided to do and how it worked out. I picked Artificial Neural Networks over HCI, and I've really enjoyed it. Having a peek into cognitive science and machine learning has ignited my interest in the subject area, and I hope to take on a postgraduate project a few years from now, when I can afford it. I have a job that I am starting after my final exams (which are in a few days), and I was indeed asked whether I had done a module in HCI or similar. It didn't seem to matter, as it isn't a front-end developer position! I would recommend taking the module if you have it as an option, as well as any module involving biological computation; it will open up more doors should you want to go into postgraduate research in the future. Thanks again, Shahin

    Read the article

  • eclipse - starting with android sdk

    - by dontHaveName
    I want to start programming for Android. What I have: Windows 7, Eclipse Classic 4.2, the ADT plugin, and all the required files from http://developer.android.com/sdk/installing/adding-packages.html downloaded. I want to install the new ADT plugin. At first I tried to install it from http://dl-ssl.google.com/android/eclipse; I added the site, but when I select it there is only "pending.." and nothing loads (maybe an internet connection problem? I have selected the "Native" connection type in preferences). After the pending state it reports:

      Unable to connect to repository http://dl-ssl.google.com/android/eclipse/content.xml
      org.eclipse.equinox.p2.core.ProvidesException

    That's why I downloaded the ADT plugin as an archive instead. When I select the downloaded plugin, its content loads (developer tools and the NDK plugin), so I select everything and click Next. It loads and then writes this:

      Cannot complete the install because one or more required items could not be found.
      Software being installed: Android Development Tools 20.0.3.v201208082019-427395 (com.android.ide.eclipse.adt.feature.group 20.0.3.v201208082019-427395)
      Missing requirement: Android Development Tools 20.0.3.v201208082019-427395 (com.android.ide.eclipse.adt.feature.group 20.0.3.v201208082019-427395) requires 'org.eclipse.wst.sse.core 0.0.0' but it could not be found

    This missing 'org.eclipse.wst.sse.core' problem is described here: http://developer.android.com/resources/faq/troubleshooting.html#installeclipsecomponents, but the solution there only covers versions 3.3 and 3.4 (I have 4.2). I tried it anyway - I checked for updates, but nothing was found. I really don't know where the problem could be (I suspect it all comes down to the internet connection, but I can't configure it). Thanks for any answer, and sorry for my English. I will send 1€ to somebody who can solve my problem ;)

    Read the article

  • c# multi threaded file processing

    - by user177883
    There is a folder that contains thousands of small text files. I aim to parse and process all of them while more files are being dropped into the folder. My intention is to multithread this operation, as the single-threaded prototype took six minutes to process 1000 files. I'd like to have reader and writer threads as follows: while the reader threads are reading the files, the writer threads should process them. Once a reader has started reading a file, I'd like to mark it as being processed, for example by renaming it, and once it has been read, rename it to "completed". How should I approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure would avoid locks? Do you have a better approach to this scheme that you'd like to share?
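
    One common shape for this is a producer/consumer pipeline; a sketch using .NET 4's BlockingCollection as the hand-off between the reader and the processing threads (the folder path and the Process method are placeholders):

      using System;
      using System.Collections.Concurrent;
      using System.IO;
      using System.Threading.Tasks;

      class Pipeline
      {
          static void Main()
          {
              var queue = new BlockingCollection<string>(boundedCapacity: 100);

              // reader: mark each file as taken, read it, and queue its contents
              var reader = Task.Factory.StartNew(() =>
              {
                  foreach (var file in Directory.EnumerateFiles(@"C:\inbox", "*.txt"))
                  {
                      File.Move(file, file + ".processing");
                      queue.Add(File.ReadAllText(file + ".processing"));
                  }
                  queue.CompleteAdding();
              });

              // writers: several consumers draining the queue in parallel
              var writers = new Task[4];
              for (int i = 0; i < writers.Length; i++)
                  writers[i] = Task.Factory.StartNew(() =>
                  {
                      foreach (var text in queue.GetConsumingEnumerable())
                          Process(text);   // placeholder for the real parsing work
                  });

              reader.Wait();
              Task.WaitAll(writers);
          }

          static void Process(string text) { /* parse and handle one file */ }
      }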

    Read the article
