Search Results

Search found 20761 results on 831 pages for 'chef client'.

Page 296/831

  • How should I start refactoring my mostly-procedural C++ application?

    - by oob
    We have a program written in C++ that is mostly procedural, but we do use some C++ containers from the standard library (vector, map, list, etc.). We are constantly making changes to this code, so I wouldn't call it a stagnant piece of legacy code that we can just wrap up. There are a lot of issues with this code making it harder and harder for us to make changes, but I see the three biggest issues being: many of the functions do more (way more) than one thing; we violate the DRY principle left and right; and we have global variables and global state up the wazoo. I was thinking we should attack areas 1 and 2 first. Along the way, we can "de-globalize" our smaller functions from the bottom up by passing in information that is currently global as parameters to the lower-level functions from the higher-level functions, and then concentrate on figuring out how to remove the need for global variables as much as possible. I just finished reading Code Complete 2 and The Pragmatic Programmer, and I learned a lot, but I am feeling overwhelmed. I would like to implement unit testing, change from a procedural to an OO approach, automate testing, use a better logging system, fully validate all input, implement better error handling and many other things, but I know if we start all this at once, we would screw ourselves. I am thinking the three I listed are the most important to start with. Any suggestions are welcome. We are a team of two programmers, mostly with in-house scripting experience. It is going to be hard to justify taking the time to refactor, especially if we can't bill the time to a client. Believe it or not, this project has been successful enough to keep us busy full time and also keep several consultants busy using it for client work.
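    As a minimal sketch of the bottom-up "de-globalization" step described above (all names are hypothetical, not from the actual code base): instead of reading a global inside a low-level function, the higher-level caller passes the value in, which also makes the function unit-testable in isolation.

        // Before: the low-level function silently depends on global state.
        //   std::map<std::string, double> g_rates;   // global
        //   double convert(double amount, const std::string& cur) {
        //       return amount * g_rates[cur];
        //   }

        // After: the dependency is passed in explicitly by the caller.
        #include <map>
        #include <string>

        double convert(double amount,
                       const std::string& cur,
                       const std::map<std::string, double>& rates) {
            auto it = rates.find(cur);
            return it != rates.end() ? amount * it->second : 0.0;
        }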

    Read the article

  • Optimal Instance Size for EC2 Sharepoint Server

    - by Rob Wilkerson
    I'm surprised that I can't find any info about this, but I'm not a Windows admin and just a novice EC2 user. I have a client who wants to stand up a Sharepoint server on EC2 for internal use. The team is small (10-20 folks) and traffic will be light. Mostly, the client is looking for one place to store documents (and revisions of documents) while making access easy for authenticated users anywhere in the world. They've settled on Sharepoint and have other EC2 instances, so that seems like the natural fit, but I'm trying to figure out what to recommend for them. I'm currently thinking about a Medium instance. I'm afraid to go smaller because I think Windows would need a fair amount of memory just to run, but I'm very open to suggestions. Any advice would be much appreciated. I expect that the storage itself would happen in an EBS mount, but again, suggestions welcome. Thanks for your input.

    Read the article

  • Using a random string to authenticate HMAC?

    - by mrwooster
    I am designing a simple web service and want to use HMAC for authentication to the service. For the purpose of this question we have: a web service at example.com; a secret key shared between a user and the server [K]; a consumer ID which is known to the user and the server (but is not necessarily secret) [D]; a message which we wish to send to the server [M]. The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it's very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters which make it into various file formats. This is of course bad implementation on the client side (100%), but I would like this web service to be easily accessible and not have trouble with different file formats. I was thinking of an alternative, which would allow the use of a short (5-10 char) random string [R] rather than the message for authentication, e.g. H = HMAC(K,R). The user then passes the random string to the server and the server checks the HMAC server side (using random string + shared secret). As far as I can see, this produces the following issues: there is no message integrity - this is OK, message integrity is not important for this service; a user could re-use the hash with a different message - I can see 2 ways around this: combine the random string with a timestamp so the hash is only valid for a set period of time, or only allow each random string to be used once; and since the client is in control of the random string, it is easier to look for collisions. I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the correct answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable 2nd best?
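    As a sketch of the timestamp variant only (hypothetical names; OpenSSL is assumed for the HMAC primitive; this illustrates the scheme, it is not a vetted design): the client computes H = HMAC(K, R + "|" + timestamp) and sends D, R, the timestamp and H, while the server recomputes the HMAC, rejects stale timestamps and remembers recently seen R values to block replays.

        #include <openssl/hmac.h>
        #include <openssl/evp.h>
        #include <string>
        #include <cstdio>
        #include <ctime>

        // HMAC-SHA256 over msg with key, returned as a lowercase hex string.
        std::string hmac_hex(const std::string& key, const std::string& msg) {
            unsigned char digest[EVP_MAX_MD_SIZE];
            unsigned int len = 0;
            HMAC(EVP_sha256(),
                 key.data(), static_cast<int>(key.size()),
                 reinterpret_cast<const unsigned char*>(msg.data()), msg.size(),
                 digest, &len);
            std::string hex;
            char buf[3];
            for (unsigned int i = 0; i < len; ++i) {
                std::snprintf(buf, sizeof(buf), "%02x", digest[i]);
                hex += buf;
            }
            return hex;
        }

        int main() {
            std::string K  = "shared-secret";                      // [K], shared out of band
            std::string R  = "a1b2c3d4";                           // [R], short random string from the client
            std::string ts = std::to_string(std::time(nullptr));   // binds the hash to a time window
            std::string H  = hmac_hex(K, R + "|" + ts);
            // Client sends D, R, ts and H; the server recomputes H, rejects stale
            // timestamps, and tracks recently used R values to prevent replay.
            std::printf("H = %s\n", H.c_str());
            return 0;
        }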

    Read the article

  • Is it possible to use bittorrent for a fileserver

    - by sris
    I would like to set up a file server that is searchable, preferably via the web. I'm wondering if it would be possible to achieve this using the BitTorrent protocol and have a single client sharing every single torrent on the server. I guess I could use some available tracker solution for the web interface or write one myself. My concern is whether there are any limits to the number of torrents a single client can share, since this may potentially be 10k torrents. The number of downloading clients is very small, only myself and my relatives. The idea is to have a single place to host everything from vacation photos to musical creations. Are there any other options for this kind of file server? It should also be easy to upload files to the server.

    Read the article

  • How-To: Run CMSDK against a RAC cluster

    - by frank.closheim
    Using CMSDK in a production environment often requires a robust, reliable and failover-enabled repository. When using Oracle Real Application Clusters (RAC) with your CMSDK repository you need to have a specific configuration in place to support such a setup. This post will explain the configuration steps required when running CMSDK 9.0.4.6 with Oracle WebLogic Server (WLS). In the previous CMSDK 9.0.4.2 version a RAC-enabled connect string looked like this: (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))(LOAD_BALANCE = NO)(FAILOVER = ON)(CONNECT_DATA =(SERVICE_NAME = rac)(failover_mode = (type=select)(method=basic))). CMSDK 9.0.4.6 makes use of data sources to connect to the underlying database. These data sources are configured inside your application server, such as Oracle WebLogic Server. In Oracle WebLogic Server 10.3.4, a single data source implementation has been introduced to support a RAC cluster. It responds to Fast Application Notification (FAN) events to provide Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and RAC instance graceful shutdown. XA affinity is supported at the global transaction Id level. The new feature is called WebLogic Active GridLink for RAC, which is implemented as the GridLink data source within WebLogic Server. This GridLink data source also works with Oracle Single Client Access Name (SCAN). SCAN is a feature used in RAC environments that provides a single name for clients to access any Oracle Database running in a cluster. You can think of SCAN as a cluster alias for databases in the cluster. The benefit is that the client's connect information does not need to change if you add or remove nodes or databases in the cluster. The CMSDK 9.0.4.6 documentation describes how to create a regular JDBC data source named jdbc/OracleDS. Please refer to the following document which describes in detail how to create a GridLink data source in WLS.

    Read the article

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based query-able data store, and despite continued searches and evaluations, I have yet to find a solution that meets my needs, which include: Data is wholly contained within XML nodes, in flat text files. There is a "native" - or at least unobtrusive - method with which to perform Create/Read/Update/Delete (CRUD) operations on the "schema". I would consider access via HTTP, XHR, JavaScript, PHP, BASH, or PERL to be unobtrusive, depending on the complexity of the set of dependencies. Server-side file-system reads and writes. A client-side interface element, accessible in any browser without a plug-in. Some extra, preferred (but optional) requirements include: Respond to simple SQL, or similar-syntax queries. Serve the data on a bare-bones HTTPS server, with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON. A few thoughts: What I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that - unless it meets ALL the requirements. Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know, that's not ready for primetime, nor even yet a recommendation. However, the ability to recursively traverse the filesystem is needed for such a system to be of any real use. At this point, I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?

    Read the article

  • design an extendible and pluggable business logic flow handler in php

    - by Broncha
    I am working on a project where I need to allow a pluggable way to inject business processes into the normal data flow. E.g., there is an ordering system. The standard flow of the application is: a consumer orders an item; pays for it and the card is authorized; admin captures the payment; the order is marked as complete and the item is shipped. But this process may vary (extra steps in between) for different clients. Say a client would need to validate the location of the consumer before he is presented with a credit card form, or his policies might require some other processes in between. I am thinking of using the State pattern for processing orders, saving the current state of the order in the database, and initializing the state of the order from the saved state. I would also need some mechanism where a small plugin would be able to inject business-specific states into the state machine. Am I thinking the right way? Are there already implemented patterns for this kind of situation? I am working with CodeIgniter, and basically this would mean for me to redirect to the proper controller according to the current state of the order. Like, if the state of the order is unconfirmed, then redirect the user to the details page and then change the state to pending. If some client would need to do some validation, then register an intermediate state between unconfirmed and pending. Please suggest.
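    The question targets PHP/CodeIgniter, but the shape of a pluggable state machine is language-agnostic; here is a compressed C++ sketch purely to illustrate the structure (all names are hypothetical). Each step of the order flow is one state object, and a client-specific plugin registers extra states under new names:

        #include <map>
        #include <memory>
        #include <string>

        struct Order;  // holds the persisted state name, payment data, etc.

        // Each step in the flow is one state object; plugins supply extra ones.
        struct OrderState {
            virtual ~OrderState() = default;
            // Returns the name of the next state to persist and redirect to.
            virtual std::string handle(Order& order) = 0;
        };

        struct StateMachine {
            std::map<std::string, std::unique_ptr<OrderState>> states;

            // Plugins call this to inject client-specific steps, e.g. "validate_location".
            void register_state(std::string name, std::unique_ptr<OrderState> s) {
                states[std::move(name)] = std::move(s);
            }
            // Advance one step: "unconfirmed" -> (maybe "validate_location") -> "pending" -> ...
            std::string step(const std::string& current, Order& order) {
                return states.at(current)->handle(order);
            }
        };

    In CodeIgniter terms, the registered state name would map to the controller/method you redirect to, and the state column saved with the order decides which handler runs next.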

    Read the article

  • Asus EEE PC 1005HA (XP Home) refuses to connect to Virgin Mobile MiFi

    - by Dennis Wurster
    My client has an Asus EEE PC model 1005HA, and we're attempting to connect it to the WiFi network created by a VirginMobile MiFi unit. They also have a MacBook Pro with Snow Leopard that has absolutely no issue connecting to the MiFi. The specific symptom is that the netbook fails to lease an IP address from the MiFi unit. I supply the 12-digit numerical password (WPA) to the netbook, it throws a 'waiting for network' dialog with an indeterminate progress indicator, and then times out. Update: We've determined that this behavior has stopped when the EEE PC and the MiFi unit were taken out of the client's home, and to a different home that didn't have an existing wifi network. Similarly, when taken to a third location that didn't have wifi, the EEE PC and MiFi got along swimmingly. My current theory is that the existing wifi networks and the wifi leg of the MiFi unit are on the same channel and competing with one another. Perhaps the MacBook Pro has the capability to overcome this interference, while the EEE PC doesn't.

    Read the article

  • Why are some web clients requesting a page named "cache"?

    - by Toto
    We see errors like this in the Apache error log: [Thu May 17 14:32:35 2012] [error] [client 192.168.1.1] File does not exist: /home/www-data/mywebsite.com/r/cache, referer: http://www.mywebsite.com/r/1010. It is strange because: there is no reference in the code/URL to a folder/file "cache"; the folder/file "cache" does not exist; the client is randomly trying to access a "cache" folder everywhere on the website. It always follows this pattern: the referer is /level1/.../levelwhatever/filename and the requested path is /level1/.../levelwhatever/cache. We run LAMP (Debian stable: PHP 5.3.3-7+squeeze9) and also use APC 3.1.3p1. We use Google Analytics and AdSense. We do not know how to reproduce the problem. Note: I replaced the user's IP in the log line for privacy.

    Read the article

  • Should I encrypt data in the database?

    - by Tio
    I have a client for whom I'm going to build a web application about patient care: managing patients, consults, history, calendars, everything about that basically. The problem is that this is sensitive data - patient history and such. The client insists on encrypting the data at the database level, but I think this is going to deteriorate the performance of the web app (but maybe I shouldn't be worried about this). I've read the laws about data protection on health issues (Portugal), but they aren't very specific about this (I just questioned them about this, I'm waiting for their response). I've read the following link, but my question is different: should I encrypt the data in the database, or not? One problem that I foresee in encrypting data is that I'm going to need a key. This could be the user password, but we all know how user passwords are (12345 etc.), and if I generate a key I would have to store it somewhere, which means that the programmer, DBA, whoever could have access to it - any thoughts on this? Even adding a random salt to the user password isn't going to solve the problem, since I can always access it, and therefore decrypt the data.

    Read the article

  • How do I develop a database-utilizing application in an agile/test-driven-development way?

    - by user39019
    I want to add databases (traditional client/server RDBMSs like MySQL/PostgreSQL, as opposed to NoSQL or embedded databases) to my toolbox as a developer. I've been using SQLite for simpler projects with only 1 client, but now I want to do more complicated things (i.e., db-backed web development). I usually like following agile and/or test-driven-development principles. I generally code in Perl or Python. Questions: How do I test my code such that each run of the test suite starts with a 'pristine' state? Do I run a separate instance of the database server for every test? Do I use a temporary database? How do I design my tables/schema so that it is flexible with respect to changing requirements? Do I start with an ORM for my language? Or do I stick to manually coding SQL? One thing I don't find appealing is having to change more than one thing (say, the CREATE TABLE statement and associated CRUD statements) for one change, because that's error-prone. On the other hand, I expect ORMs to be a lot slower and harder to debug than raw SQL. What is the general strategy for migrating data between one version of the program and a newer one? Do I carefully write ALTER TABLE statements between each version, or do I dump the data and import fresh in the new version?

    Read the article

  • FreeBSD's VPN & Mac OS X IPSecuritas

    - by alexus
    I need to be able to VPN in to my FreeBSD server from my Mac using IPSecuritas. I was wondering if anyone has ever done something similar. I'm reading up on VPN over IPsec, but that mainly covers the case where you have 2 nodes with 2 public IP addresses. My endpoint in IPSecuritas is configured with MODE_CFG enabled, so it will query the other node for the address it's coming from. SSH is out of the question - it is not a VPN solution, and the people who'd end up using the VPN wouldn't know what to do, so I need a very simple VPN, the kind you get to use almost anywhere: you have a client and you have a server, the client makes a connection to the server and boom, you're in...

    Read the article

  • Input prediction and server re-simulation

    - by Lope
    I have read plenty of articles about multiplayer principles and I have a basic client-server system set up. There is however one thing I am not clear on. When a player enters input, it is sent to the server, which steps back in time to check what should have happened at the time of that input and re-simulates the world again. So far everything's clear. All the articles took shooting as an example, because it is easy to explain and pretty straightforward, but I believe movement is more complicated. Imagine the following situation: 2 players move towards each other. A------<------B Player A stops halfway towards the collision point, but there is a lag spike so the command does not arrive on the server for a second or so. The current state of the world on the server (and on the other clients as well) at the time when the input arrives is this: [1]: -------AB------- The command arrives and we go back in time and re-simulate the world; the result is this: [2]: ---AB----------- Player A sees situation [2], which is correct, but the other player is suddenly teleported from the position in [1] (center) to the position in [2]. Is this how this is supposed to work? The point of client prediction is to give the lagged player the feeling that everything is smooth, not to ruin the experience for other players. The alternative is to discard the timestamp on the player's input and handle it when it arrives on the server without going back in time. This, however, creates even more severe problems for the lagged player (even if he is lagging just a bit).
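    For what it's worth, the usual way to hide the jump from [1] to [2] on the other clients is not to skip the rewind but to smooth the correction: keep a separately rendered position and blend it toward the authoritative, re-simulated position over a few frames. A rough sketch of that smoothing step (the names and the convergence rate are illustrative, not from any particular engine):

        struct Vec2 { float x, y; };

        struct RemotePlayer {
            Vec2 rendered;   // position currently drawn on this client
            Vec2 corrected;  // authoritative position after the server re-simulation
        };

        // Called every frame; eases the drawn position toward the corrected one
        // instead of teleporting, so other players see a quick slide, not a jump.
        void smooth_correction(RemotePlayer& p, float dt) {
            const float rate = 10.0f;      // higher = snappier convergence
            float t = rate * dt;
            if (t > 1.0f) t = 1.0f;
            p.rendered.x += (p.corrected.x - p.rendered.x) * t;
            p.rendered.y += (p.corrected.y - p.rendered.y) * t;
        }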

    Read the article

  • Wrong owner and group for files created under a samba shared directory

    - by agmao
    I am trying to make writing to a shared Samba directory work, and I have run into a very weird problem. The shared directory is now writable from a client machine, but the files created under the Samba share have weird owner and group names. I am writing to the shared directory as user mike from the client machine, but the files created always have the user and group name steve instead... Does anybody know why that would happen...? Another thing I just noticed is that on the Samba server, the files have the owner and user name samba, which I created for Samba clients. Thanks a lot

    Read the article

  • How to protect Ruby on Rails code on external server?

    - by Phil Byobu
    I have to deploy a Ruby on Rails application on a client's server and I do not want them to be able to view or modify the source code. How would you protect the code technically? I thought about building a Linux-based virtual machine with an encrypted filesystem where the application code resides. The client has no root access, or direct access to the system at all. All services start automatically and the application is ready to use. What would you suggest?

    Read the article

  • Resume on 30 Days of SharePoint

    Dear readers, as you might have noticed... it was an organisational disaster on my end! Even though I continued my studies and research on Microsoft SharePoint 2013 during the last 30 days, I wasn't able to write an article a day to keep you posted on my progress. Nonetheless, I gathered a good number of additional blogs, mainly SharePoint MVP sites, and online forums which will be helpful in the next couple of weeks while I'm actually going to develop a C#-based client which will enable an existing 'legacy' application to use SharePoint as a document management system (DMS) besides other already existing solutions. Finding excuses Well, no. Not really. I simply didn't block any or enough time every day to write down my progress during my own challenge. My log book on learning about SharePoint stands at 41 hours and 15 minutes during this month, which means that I spent an average of more than 1 hour per day on getting into SharePoint. I know that might sound a little bit low, but also keep in mind that I went for the challenge on top of my daily job and private responsibilities. During the same period there had been two priority 0 incidents from clients - external root cause - which took precedence over this leisure project. More to come Anyway, it was a first trial, and despite the low level of reporting on my blog I'm confident about what I learned during the last 30 days, and I'm ready to implement the client's requirements. At least, I would say that I have a better understanding about the road map, or the path to walk, during the next month. As time and secrecy allow, I'm going to note down some bits and pieces... During the process of development, I'm going to 'cheat' on the challenge summary article and add links to those new entries. Just for the sake of completeness. Next challenge? Hmm, there had been ideas during the last meetup of the Mauritius Software Craftsmanship Community (MSCC) regarding certifications in IT, and eventually we might organise some kind of a study group for specific exams, most probably Microsoft exams towards MCSD Web Developer or Windows Developer.

    Read the article

  • Google Analytics unexplained spike

    - by Dianne
    My client's Google Analytics has shown a spike every day since May 6th (from 0 to 100). This is in a city that he is not optimized for and does very little business in. The hits are coming in direct to the website. My client is concerned that it has something to do with competition using his site as a price-shopping device. I can't view the IP to see where they are coming from, and his site is not built in PHP, so the workaround doesn't work here. Any thoughts? Could it be a "referring site" situation, and if so, is there a way for me to find out what the referring site is?

    Read the article

  • Single-developer GIT workflow (moving from straightforward FTP)

    - by melat0nin
    I'm trying to decide whether moving to VCS is sensible for me. I am a single web developer in a small organisation (5 people). I'm thinking of VCS (Git) for these reasons: version control, offsite backup, centralised code repository (can access from home). At the moment I work on a live server generally. I FTP in, make my edits and save them, then reupload and refresh. The edits are usually to theme/plugin files for CMSes (e.g. concrete5 or Wordpress). This works well but provides no backup and no version control. I'm wondering how best to integrate VCS into this procedure. I would envisage setting up a Git server on the company's web server, but I'm not clear how to push changes out to client accounts (usually VPSes on the same server) - at the moment I simply log into SFTP with their details and make the changes directly. I'm also not sure what would sensibly represent a repository - would each client's website get their own one? Any insights or experience would be really helpful. I don't think I need the full power of Git by any means, but basic version control and de facto cloud access would be really useful.

    Read the article

  • How do I daemonify my daemon?

    - by jonobacon
    As part of the Ubuntu Accomplishments system I have a daemon that runs as well as a client that connects to it. The daemon is written in Python (using Twisted) and provides a dbus service and a means of processing requests from the clients. Right now the daemon is just a program I run before I run the client, and it sets up the dbus service and provides an API that can be used by the clients. I want to transform this into something that can be installed and run as a system service for the user's session (e.g. starting on boot), providing a means to start and stop it etc. The problem is, I am not sure what I need to do to properly daemonify it so it can run as this service. I wanted to ask if others can provide some guidance. Some things I need to ask: How can I treat it as a service that is run for the current user's session (not a system service right now)? How do I ensure I can start, stop, and restart this session service? When packaging this, how do I ensure that it installs it as a service for the user's session and is started on login etc.? In responding, if you can point me to specific examples or solutions I need to implement, that would be helpful. :-) Thanks! Jono

    Read the article

  • Windows recv method usage

    - by vandamon taigi
    I'm making a multiplayer game and I have an issue with the recv function (or the send one, not sure). Server side code:
        char* UserName = new char[256];
        ZeroMemory(UserName,256);
        recv(sConnect,UserName,256,0); // works
        char* Password = new char[256];
        ZeroMemory(Password,256);
        recv(sConnect,Password,256,0); // works
        users[ ++usercount ] = new Client(UserName,Password,sConnect);
        if( users[usercount]->GetLogInSuccesful() )
            send(sConnect,"0x0001",6,0); // debugging shows it gets here and sends the data.
    Client side code:
        send(server->getsConnect(),User,256,0); // works
        send(server->getsConnect(),Pass,256,0); // works
        char* Response = new char[6];
        ZeroMemory(Response,6);
        recv(server->getsConnect(),Response,6,0); // gets stuck here.
    Any idea why it gets stuck on that recv? I also tried making Response[256] and such.
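    One thing worth ruling out (not necessarily the cause of this particular hang - for instance, if GetLogInSuccesful() returns false the server never sends anything at all): TCP is a byte stream, so recv() may return fewer bytes than requested, and fixed-size messages should be read in a loop until complete. A sketch of such a helper:

        #include <winsock2.h>

        // Keeps calling recv() until exactly `len` bytes have arrived, the peer
        // closed the connection, or an error occurred. True only on a full read.
        bool recv_all(SOCKET s, char* buf, int len) {
            int got = 0;
            while (got < len) {
                int n = recv(s, buf + got, len - got, 0);
                if (n <= 0) return false;  // 0 = connection closed, SOCKET_ERROR = failure
                got += n;
            }
            return true;
        }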

    Read the article

  • What sort of data should be sent for mouse-based movement in a multiplayer game?

    - by Daniel
    I'm new to the Multiplayer Rodeo here so please bear with me... I am just getting started and I'm trying to figure out how to deal with movement. I've looked at the question Best way to implement mouse-based movement in MMOG, which gives me a pretty good idea, but I'm still struggling with what kind of data should be sent to the server. If a player is at position [x:0, y:0] and I click with the mouse on [x:40, y:40] to start movement, what information should I send to the server? Should I calculate the position based on velocity on the client side and just send the expected location? Or should I send the current location and the velocity and direction? When the server is updating the clients on the players' whereabouts, should only the position be sent, with the clients expected to interpolate/predict movement, or can the direction sent from the client (instead of just coordinates) be used? My concern (or confusion) is regarding the ping/lag, the frequency of data updates, and the use of a predictive algorithm, as I'd like the movement to be smooth even with high latency, and to prevent the ability to cheat (though that's not the top priority).
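    One common shape for this is to send the intent rather than the result: the click target plus a client timestamp/sequence number, and let the server own speed and pathing while broadcasting authoritative snapshots that clients interpolate between. A hypothetical pair of message structs, purely as a sketch:

        #include <cstdint>

        // Client -> server: "I clicked here"; the server owns speed and pathing,
        // so a hacked client cannot claim an impossible velocity.
        struct MoveCommand {
            uint32_t sequence;   // client-side sequence number, echoed back for reconciliation
            uint32_t client_ms;  // client timestamp, used to compensate for latency
            float    target_x;   // world coordinates of the mouse click (e.g. 40, 40)
            float    target_y;
        };

        // Server -> all clients: authoritative snapshot; clients interpolate between
        // the last two snapshots (and extrapolate briefly on packet loss).
        struct EntitySnapshot {
            uint32_t entity_id;
            uint32_t server_ms;
            float    x, y;       // position at server_ms
            float    vx, vy;     // velocity, lets clients predict between snapshots
        };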

    Read the article

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that represents a server which receives an image in bytes over TCP (of size at most 500 KB) and writes it to a file. It then runs a Sobel filter on this image and sends it over a TCP socket to the client side. I ran it on my laptop and it was very fast, but when I put it on an Amazon EC2 m1.large instance, I found it is very slow - around 10 times slower. It might be inefficiency in the code's algorithm, but in fact my code does nothing but receive the image (like any byte file), run the Sobel algorithm and send it back. I have the following questions: 1- Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2. 2- Even if the code is not that efficient, the server is ultimately handling a very low load (just one client); does the "inefficient" code justify such performance? 3- My laptop is dual core only... Why would the Amazon EC2 server have worse performance than my laptop? How is this explained? Excuse me for my ignorance.

    Read the article

  • Protect individual sites on Ubuntu/Apache server

    - by Christoffer
    Hi, I need to set up an Apache server configuration for some client sites that run on the same Ubuntu 9.10 machine. All sites are allowed to run PHP, Python and Ruby on Rails. I do not control the source code of these sites, so I need to set up a filter in order to prevent one user from reaching files in another user's account. If I run a script to list files in "/" from one account, I can browse some files and directories in the actual server root. I want to set the root for each account to /var/usersite.com/www/ instead, so that listing files in "/" shows the files in the client's root. How is this most easily configured? Cheers! /Christoffer

    Read the article

  • Options for transparent data encryption on SQL 2005 and 2008 DBs.

    - by Dan
    Recently, a law was passed (rather silently) in Massachusetts requiring that data containing personally identifiable information must be encrypted. PII is defined by the state as containing the resident's first and last name in combination with either: A. SSN, B. driver's license or ID card #, or C. debit or CC #. Due to the nature of the software we make, all of our clients use SQL Server as the backend. Typically servers will be running SQL 2005 Standard or above, sometimes SQL 2008. Almost all client machines use SQL 2005 Express. We use replication between client and server. Unfortunately, to get TDE you need to have SQL Enterprise on each machine, which is absolutely not an option. I'm looking for recommendations of products that will encrypt a DB. Right now, I'm not interested in whole-disk encryption at all.

    Read the article

  • How to manage configuration and software installations of non-domain Windows XP machines?

    - by Digi
    I have a large set of unattended Windows XP machines that are not connected to a domain or even to each other. I am struggling to find any tools out there that I can use to deal with them in one application. I am hoping to find software where I can install a client on each machine and then have it essentially proxy out configuration information and possibly commands (install, uninstall, stop service, etc.) across the whole network. The closest I've come is Nagios and its client, but it cannot be used to push files through and run commands remotely. Any suggestions?

    Read the article
