Search Results

Search found 12448 results on 498 pages for 'offline mode'.


  • How to build a simulation of a login hardware token in .Net

    - by Michel
    Hi, I have a hardware token for remote login to a Citrix environment. When I click the button on the device, I get an ID that I can use to log in to the Citrix farm. I can click the button as often as I like; every time a new code is generated, and they all work. Now I want to secure my private website the same way, but with a 'token app' on my phone instead of the hardware token. So I run an app on my phone, generate a key, and use that to (partly) authenticate myself on the server. But here's the point: I don't know how it works! How can I generate 1, 2 or 100 keys at one time which the server can all recognize as valid, without the server and the phone app ever being in contact (the hardware token is also an 'offline' solution)? Can you give me a hint how I would do this? This is what I have thought of so far: the phone app and the server both know (hardcoded) the same encryption key. The phone app encrypts the current time. The server decrypts the string back to a time, and if the difference between that time and the actual server time is less than 10 minutes, it's OK. That makes it difficult for other users to fake a key, but encryption produces nasty strings to type in, whereas the hardware token gives me nice values like 'H554TU8'. And this is probably not how the real hardware token works, because then the server and the phone app would have to 'know' the same encryption key. Michel
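
    What such hardware tokens typically implement is an HOTP/TOTP-style one-time password (RFC 4226/6238): both sides share a per-device secret and derive a short code from it with an HMAC over a counter or the current time step, so no run-time contact is needed. Below is a minimal TOTP sketch in C#; the secret, step size and code length are illustrative choices, not values from any particular token.

    ```csharp
    using System;
    using System.Security.Cryptography;

    static class Totp
    {
        // Derive a 6-digit code from a shared secret and the current 30-second time step
        // (RFC 6238 on top of the RFC 4226 truncation; step/digits are example choices).
        public static string Code(byte[] secret, DateTime utcNow, int stepSeconds = 30, int digits = 6)
        {
            var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
            long step = (long)(utcNow - epoch).TotalSeconds / stepSeconds;

            byte[] counter = BitConverter.GetBytes(step);
            if (BitConverter.IsLittleEndian) Array.Reverse(counter);   // HMAC input is big-endian

            using var hmac = new HMACSHA1(secret);
            byte[] hash = hmac.ComputeHash(counter);

            int offset = hash[^1] & 0x0F;                              // dynamic truncation
            int binary = ((hash[offset] & 0x7F) << 24)
                       | (hash[offset + 1] << 16)
                       | (hash[offset + 2] << 8)
                       |  hash[offset + 3];
            return (binary % (int)Math.Pow(10, digits)).ToString().PadLeft(digits, '0');
        }
    }
    ```

    The server runs the same function for the current step (and usually one step on either side, to allow clock drift) and compares. Codes like 'H554TU8' are just the same truncated value rendered in a base-32-style alphabet instead of decimal digits.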

    Read the article

  • Synchronizing one or more databases with a master database - Foreign keys

    - by Ikke
    I'm using Google Gears to be able to use an application offline (I know Gears is deprecated). The problem I am facing is synchronization with the database on the server; specifically the primary keys or, more exactly, the foreign keys. When sending the information to the server, I could easily ignore the primary keys and generate new ones, but then how would I know what the relations are? I had one solution in mind, but then I would need to save all the primary keys for every client. What is the best way to synchronize multiple clients with one server DB? Edit: I've been thinking about it, and I guess sequential primary keys are not the best solution, but what other possibilities are there? Time-based keys don't seem right because of the collisions that could happen. A GUID comes to mind; is that an option? It looks like generating a GUID in JavaScript is not that easy. I could do something with natural keys or composite keys. As I think about it, that looks like the best solution. Can I expect any problems with that?
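
    Client-generated GUIDs are a common way out of this: if every row is keyed by a GUID minted on the client, the foreign keys survive the upload unchanged and the server never has to renumber anything. A version-4 GUID is just 16 random bytes with the version and variant bits set, which is straightforward to port to JavaScript (crypto.getRandomValues or Math.random for the bytes). Here is the idea as a C# sketch; the formatting helper is illustrative.

    ```csharp
    using System;
    using System.Security.Cryptography;

    static class ClientId
    {
        // Build an RFC 4122 version-4 (random) GUID: 16 random bytes,
        // with the version nibble set to 4 and the variant bits set to 10xx.
        public static string NewV4Guid()
        {
            byte[] b = new byte[16];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(b);

            b[6] = (byte)((b[6] & 0x0F) | 0x40);   // version 4
            b[8] = (byte)((b[8] & 0x3F) | 0x80);   // RFC 4122 variant

            string hex = BitConverter.ToString(b).Replace("-", "").ToLowerInvariant();
            return string.Format("{0}-{1}-{2}-{3}-{4}",
                hex.Substring(0, 8), hex.Substring(8, 4), hex.Substring(12, 4),
                hex.Substring(16, 4), hex.Substring(20));
        }
    }
    ```

    The keys are longer than integers, but they remove the need to reconcile IDs (and rewrite foreign keys) after the fact.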

    Read the article

  • ASP.NET - What is the best way to block the application usage?

    - by Tufo
    Our clients must pay a monthly fee... if they don't, what is the best way to block usage of the ASP.NET software? Note: the application runs on the client's own server; it's not a SaaS app. My ideas are:
    Idea 1: Host a web service on the internet that the application calls to learn whether the client may use the software. Issue 1 - What happens if the client's internet connection fails? Or the data center fails? Possible answer: have each web service call return a key that is valid for 7 or 15 days, so each successful call lets the software run for another 7 or 15 days; the application is only locked after 7 or 15 days without reaching our web service. Issue 2 - What if the client doesn't have, or doesn't want to enable, internet access for the application?
    Idea 2: Send a key to the client every month. Issue - How do you make an offline key? Possible answer: generate a hash using the "limit" date, so each login attempt compares today's hash with the key? Issue 2 - Where do you store the key? Possible answer: database (not good, too easy to change), text file, registry, code file, assembly...
    Any opinion will be very appreciated!
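
    For the offline key, a plain hash of the limit date is easy to forge once someone works out the scheme; an asymmetric signature is not. One sketch, assuming a 2048-bit RSA key pair you generate once, with only the public key shipped inside the application:

    ```csharp
    using System;
    using System.Globalization;
    using System.Security.Cryptography;
    using System.Text;

    static class LicenseKeys
    {
        // Issued by you (holder of the private key): sign an expiry date.
        public static string Issue(RSA privateKey, DateTime expiresUtc)
        {
            string date = expiresUtc.ToString("yyyy-MM-dd");
            byte[] sig = privateKey.SignData(Encoding.UTF8.GetBytes(date),
                                             HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            return date + "." + Convert.ToBase64String(sig);
        }

        // Checked by the application, which only embeds the public key.
        public static bool IsValid(RSA publicKey, string licenseKey, DateTime nowUtc)
        {
            string[] parts = licenseKey.Split('.');
            if (parts.Length != 2) return false;

            bool genuine = publicKey.VerifyData(Encoding.UTF8.GetBytes(parts[0]),
                                                Convert.FromBase64String(parts[1]),
                                                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            DateTime expires = DateTime.ParseExact(parts[0], "yyyy-MM-dd", CultureInfo.InvariantCulture);
            return genuine && expires >= nowUtc.Date;
        }
    }
    ```

    Where the key is stored then matters less than with a hash, because editing the stored value only breaks the signature; the thing to protect is the private key, which never leaves you. The same web service can hand out a fresh signed key on each successful check to cover the 7/15-day grace period idea.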

    Read the article

  • Document management, SCM ?

    - by tsunade
    Hello, this might not be a hard-core programming question, but I suspect it's related to some of the tools programmers use. We're a bunch of people, each with a bunch of documents and a bunch of different computers on a bunch of operating systems (well, only two: Linux and Windows). The best way these documents can be stored/managed is if they are available offline (the laptop might not always be online) but also synchronized between all the machines. Having a server with extra-reliable storage act as a "base repository" seems like a good idea to me. Using an SCM comes to mind, and I've tried Subversion; its centralized repository seems like a good thing, but: when checking out, the total size of the checkout is roughly double the original size, and big files or big repositories seem to slow it down. I've also tried rsync, which might work, but it's a bit rough when it comes to potential conflicts. Finally I've tried Unison (which is a wrapper around rsync, I think), and while it works, it becomes horribly slow for the big directories we have here, since it has to scan everything. So the question is: is there an SCM tool out there that is actually practical to use for a big bunch of both small and big files? If the answer is no, does anyone know other tools that do this job? Thanks for reading :)

    Read the article

  • How do people handle foreign keys on clients when synchronizing to master db

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master DB every so often. I'm using UUIDs on both the client and the server side. When syncing up to the server, the server returns a map of local UUIDs (luid) to server UUIDs (suid). Upon receiving this map, clients update their records' suid attributes with the appropriate values. However, say a client record, e.g. a todo, has an attribute 'list_id' which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server DB with luids rather than the suids the server is using. My current solution is for the master server to keep a record of the mappings of luids to suids (per client ID) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. I'm wondering whether others have come across this problem and, if so, how they have solved it. Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys" and someone seemed to suggest my current solution as one option, composite keys using suids and auto-incrementing sequences as another, and a third option using negative IDs for client IDs and then updating all negative IDs with the suids. Both of the other options seem like a lot more work. Thanks, Saimon
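
    The per-client mapping can stay very small if the server rewrites foreign keys as the commands come in: keep a dictionary of luid to suid for the batch, add to it whenever a new row is assigned a server ID, and translate any foreign-key field through it before touching the database. A minimal sketch; the command shape and lifetime of the map are assumptions:

    ```csharp
    using System;
    using System.Collections.Generic;

    class SyncSession
    {
        // luid -> suid translations learned while processing this client's batch
        // (persist per client if later batches can reference rows from earlier syncs).
        private readonly Dictionary<Guid, Guid> _map = new Dictionary<Guid, Guid>();

        // Called when the server stores a newly uploaded row under its own ID.
        public Guid RegisterInsert(Guid luid)
        {
            Guid suid = Guid.NewGuid();
            _map[luid] = suid;
            return suid;
        }

        // Foreign keys arrive holding client-side IDs; translate before saving.
        public Guid ResolveForeignKey(Guid idFromClient)
        {
            return _map.TryGetValue(idFromClient, out Guid suid) ? suid : idFromClient;
        }
    }
    ```

    If parents are always uploaded before the children that reference them (lists before todos), a single pass is enough and the mapping never needs to outlive the sync batch.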

    Read the article

  • GPS does not update location after closing and reopening the app on Android

    - by mrmamon
    After I close my app for a while and then reopen it, the app does not update the location, or sometimes it takes a long time (about 5 minutes) before updating. How can I fix it? This is my code:

    ```java
    private LocationManager lm;
    private LocationListener locationListener;

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
        locationListener = new MyLocationListener();
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, locationListener);
    }

    private class MyLocationListener implements LocationListener {

        @Override
        public void onLocationChanged(Location loc) {
            if (loc != null) {
                TextView gpsloc = (TextView) findViewById(R.id.widget28);
                gpsloc.setText("Lat:" + loc.getLatitude() + " Lng:" + loc.getLongitude());
            }
        }

        @Override
        public void onProviderDisabled(String provider) {
            TextView gpsloc = (TextView) findViewById(R.id.widget28);
            gpsloc.setText("GPS OFFLINE.");
        }

        @Override
        public void onProviderEnabled(String provider) {
        }

        @Override
        public void onStatusChanged(String provider, int status, Bundle extras) {
        }
    }
    ```

    Read the article

  • C# Type conversion between two similar Datatable objects

    - by Ali
    I have a .NET project using Sync Framework, with two separate DataSets for MS SQL and Compact SQL. In my base class I have a generic DataTable object. In my derived classes I assign a typed DataTable to the generic object based on whether the application is operating online or offline, for example:

    ```csharp
    if (online)
        _dataTable = new MSSQLDataSet.Customer();
    else
        _dataTable = new CompactSQLDataSet.Customer();
    ```

    Now everywhere in my code I have to check and cast based on the current network mode, like this:

    ```csharp
    public void changeCustomerID(int ID)
    {
        if (online)
            ((MSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
        else
            ((CompactMSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
    }
    ```

    But I don't think this is very efficient, and I believe it can be done in a smarter way, with only one line of code, by getting the type of _dataTable dynamically at run time. My problem is at design time: in order to access DataTable properties such as "CustomerID", it has to be cast to either MSSQLDataSet.CustomerDataTable or CompactMSSQLDataSet.CustomerDataTable. Is there a way to have a function or an operator that converts _dataTable to its runtime type but still lets me use its design-time properties, which are the same between the two types? Something like:

    ```csharp
    ((aType)_dataTable)[i].CustomerID = value;
    // or
    GetRuntimeType(_dataTable)[i].CustomerID = value;
    ```
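
    One way to avoid the per-call cast entirely is to fall back to the untyped DataTable surface that both generated classes share: typed DataTables derive from System.Data.DataTable, so a column can be read and written by name regardless of which DataSet produced the table. A sketch; the column and field names follow the question and are assumptions about the generated schema:

    ```csharp
    using System.Data;

    class CustomerRepository
    {
        private DataTable _dataTable;   // holds either typed CustomerDataTable at runtime

        // Works against whichever typed table is assigned, because both expose
        // a "CustomerID" column through the base DataTable API.
        public void ChangeCustomerId(int rowIndex, int newId)
        {
            _dataTable.Rows[rowIndex]["CustomerID"] = newId;
        }

        public int GetCustomerId(int rowIndex)
        {
            return (int)_dataTable.Rows[rowIndex]["CustomerID"];
        }
    }
    ```

    You give up compile-time checking of the column name; if that matters, an alternative is a small interface (say, ICustomerRow with a CustomerID property) implemented in partial classes of both generated row types, which keeps one code path while keeping the compiler involved.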

    Read the article

  • Installing a group of .deb files in Ubuntu

    - by p00ya
    Hi. I have a directory of .deb files which I copied from apt's cache folder. There are many applications and Ubuntu updates among them, but there are no missing dependencies, because they were all downloaded automatically by Add/Remove Applications and Update Manager. Now I have installed the same version of Ubuntu (9.04) again and I want to install those apps and updates again (even though they are not new versions). In other words, I want to make this fresh Ubuntu install exactly like the old one, but without downloading anything, using only the .deb files I copied. All I have is an archive folder containing the .deb files and a 'pkgcache.bin' file. I know I can double-click the .deb files and install them manually, but then I have to work out and follow the dependencies one by one from the installer errors. I have also tried adding an offline repository, but it didn't work - I think because all of my .debs are in one folder, and there are no separate 'main', 'restricted', ... folders. Is there a way to do all of this automatically? Thanks

    Read the article

  • Facebook Graph API does not return all feed items on a Facebook Page

    - by Nick Franceschina
    At the time of this question, if you go here: http://www.facebook.com/realplayer you'll see, six posts down, that I posted a photo with the message "#highfive Cincinnati, OH". But if you go to either of these: http://graph.facebook.com/realplayer/feed http://graph.facebook.com/realplayer/tagged the JSON that is returned seemingly includes everything on the wall except for MY post. There is another photo post from someone else below mine, and it does show up (and both my photo and his photo are in the "Fan photos" section). Obviously, since I can see everything with these links already, it appears that access_token is not part of the equation... BUT, some more info: if I use an access_token from a session that isn't me, I can't see the post in the JSON; if I use an access_token from MY logged-in session, then I DO see the post in the JSON. So I'm very confused. If everyone in the world can see those posts on the wall without even authenticating, then I expect all of them to come back from the Graph API as well. Does anyone have thoughts on this? I am aware of the "manage_pages" permission... which I can use to get a list of accounts and special offline access tokens for those pages... and that's something I can explore... but it seems like a lot of work when my post seemingly SHOULD be there in the graph.

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in an ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently redid an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data is only needed a couple of times a year, and then only by two employees. I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key, so it seems like a SQL injection attack could still lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leak in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that the two employees run the few times a year they need the restricted data, so the data is decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attacker does we may leak future data. I think the big disadvantage is that this would require rewriting the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?
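
    The write-with-public-key / read-with-offline-private-key split is straightforward to sketch. Assuming the restricted fields are small (an RSA-2048 OAEP-SHA256 block holds at most 190 bytes of plaintext; anything bigger would need an AES key wrapped by RSA instead), the web tier only ever sees the public key:

    ```csharp
    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class RestrictedField
    {
        // Runs in the web application: only the public key is deployed to the server.
        public static byte[] Protect(RSA publicKey, string plaintext)
        {
            return publicKey.Encrypt(Encoding.UTF8.GetBytes(plaintext),
                                     RSAEncryptionPadding.OaepSHA256);
        }

        // Runs in the offline fat client, the only place the private key exists.
        public static string Reveal(RSA privateKey, byte[] ciphertext)
        {
            return Encoding.UTF8.GetString(privateKey.Decrypt(ciphertext,
                                           RSAEncryptionPadding.OaepSHA256));
        }
    }
    ```

    A dump of the database (via SQL injection or otherwise) then yields only ciphertext; the exposure that remains is exactly the one described above - data written after the server itself has been compromised.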

    Read the article

  • How to design authentication in a thick client, to be fail safe?

    - by Jay
    Here's a use case: I have a desktop application (built using Eclipse RCP) which, on start, pops open a dialog box with 'UserName' and 'Password' fields. Once the end user inputs his username and password, a server is contacted (a Spring remote servlet, with the client side being a Spring HttpClient, similar to the approaches here), and authentication is performed on the server side. A few questions related to the above scenario: If this authentication service were to go down, what would be the best way to handle further proceedings? Authentication is something I cannot do away with. Would running the desktop client in a "limited" mode be a good idea - for instance, important features/menus/views disabled, with the rest of the application accessible? Should I have a backup authentication service running on a different machine as a failover? What are the general best practices in this scenario? I remember reading about Google Gears and how it would let you edit and do stuff offline - should something like this be designed? Please let me know your design/architectural comments/suggestions. Appreciate your help.
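
    One common fail-safe is to cache a salted hash of the last credentials the server accepted, together with a timestamp, and fall back to checking against that cache (and entering the limited mode) whenever the authentication service is unreachable. The client here is an Eclipse RCP app, so the real implementation would be Java; the shape of the idea, sketched in C# with illustrative names and parameters:

    ```csharp
    using System;
    using System.Security.Cryptography;

    class OfflineLoginCache
    {
        public byte[] Salt;          // random, stored alongside the hash
        public byte[] Hash;          // PBKDF2 of the last password the server accepted
        public DateTime CachedAtUtc; // used to expire the offline fallback

        public static OfflineLoginCache Create(string password)
        {
            var cache = new OfflineLoginCache { Salt = new byte[16], CachedAtUtc = DateTime.UtcNow };
            using (var rng = RandomNumberGenerator.Create()) rng.GetBytes(cache.Salt);
            using (var kdf = new Rfc2898DeriveBytes(password, cache.Salt, 100000, HashAlgorithmName.SHA256))
                cache.Hash = kdf.GetBytes(32);
            return cache;
        }

        // Offline check: only trusted for a limited window, and only for "limited mode".
        public bool Matches(string password, TimeSpan maxAge)
        {
            if (DateTime.UtcNow - CachedAtUtc > maxAge) return false;
            using (var kdf = new Rfc2898DeriveBytes(password, Salt, 100000, HashAlgorithmName.SHA256))
                return CryptographicOperations.FixedTimeEquals(kdf.GetBytes(32), Hash);
        }
    }
    ```

    A second authentication server behind the same address is still worth having; the cache only covers the gap until one of them is reachable again, and it should only ever unlock the reduced feature set.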

    Read the article

  • Building an 'Activation Key' Generator in Java

    - by jax
    I want to develop a key generator for my phone applications. Currently I am using an external service to do the job, but I am a little concerned that the service might go offline one day, and then I'd be in a bit of a pickle. How authentication works now: a public key is stored on the phone. When the user requests a key, the 'phone ID' is sent to the "Key Generation Service" and an encrypted key is returned and stored inside a license file. On the phone I can check whether the key is for the current phone by using a method getPhoneId(), which I can compare against the current phone to grant or deny access to features. I like this and it works well; however, I want to create my own "Key Generation Service" on my own website. Requirements: public and private key encryption (Bouncy Castle); written in Java; must support getApplicationId() (so that many applications can use the same key generator) and getPhoneId() (to get the phone ID out of the encrypted license file). I want to be able to send the ApplicationId and PhoneId to the service for license key generation. Can someone give me some pointers on how to accomplish this? I have dabbled around with some Java encryption but am definitely no expert and can't find anything that will help me. A list of the Java classes I would need to instantiate would be helpful.
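
    A common design is for the service to sign (rather than encrypt) a payload made of the ApplicationId and PhoneId with its private key; the phone verifies the signature with the bundled public key and reads the IDs straight out of the payload. On the Java side that would mean java.security.KeyPairGenerator (run once to create the pair), java.security.Signature with "SHA256withRSA" (Bouncy Castle can act as the provider), and java.util.Base64 to make the license printable. The same scheme as a C# sketch; the pipe-delimited license format is an illustrative assumption:

    ```csharp
    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class LicenseService
    {
        // Server side: sign "applicationId|phoneId" with the private key.
        public static string CreateLicense(RSA privateKey, string applicationId, string phoneId)
        {
            string payload = applicationId + "|" + phoneId;
            byte[] sig = privateKey.SignData(Encoding.UTF8.GetBytes(payload),
                                             HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            return payload + "|" + Convert.ToBase64String(sig);
        }

        // Phone side: verify with the embedded public key, then expose the IDs.
        public static bool TryReadLicense(RSA publicKey, string license,
                                          out string applicationId, out string phoneId)
        {
            applicationId = phoneId = null;
            string[] parts = license.Split('|');
            if (parts.Length != 3) return false;

            byte[] payload = Encoding.UTF8.GetBytes(parts[0] + "|" + parts[1]);
            bool ok = publicKey.VerifyData(payload, Convert.FromBase64String(parts[2]),
                                           HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            if (ok) { applicationId = parts[0]; phoneId = parts[1]; }
            return ok;
        }
    }
    ```

    getApplicationId() and getPhoneId() then just return the two verified payload fields, and the private key stays on your web server.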

    Read the article

  • Google Data API returning an invalid access token

    - by kingdavies
    I'm trying to pull a list of contacts from a Google account, but Google returns a 401. The URL used for requesting an authorization code:

    ```apex
    String codeUrl = 'https://accounts.google.com/o/oauth2/auth' + '?' +
        'client_id=' + EncodingUtil.urlEncode(CLIENT_ID, 'UTF-8') +
        '&redirect_uri=' + EncodingUtil.urlEncode(MY_URL, 'UTF-8') +
        '&scope=' + EncodingUtil.urlEncode('https://www.google.com/m8/feeds/', 'UTF-8') +
        '&access_type=' + 'offline' +
        '&response_type=' + EncodingUtil.urlEncode('code', 'UTF-8') +
        '&approval_prompt=' + EncodingUtil.urlEncode('force', 'UTF-8');
    ```

    Exchanging the returned authorization code for an access token (and refresh token):

    ```apex
    String params = 'code=' + EncodingUtil.urlEncode(authCode, 'UTF-8') +
        '&client_id=' + EncodingUtil.urlEncode(CLIENT_ID, 'UTF-8') +
        '&client_secret=' + EncodingUtil.urlEncode(CLIENT_SECRET, 'UTF-8') +
        '&redirect_uri=' + EncodingUtil.urlEncode(MY_URL, 'UTF-8') +
        '&grant_type=' + EncodingUtil.urlEncode('authorization_code', 'UTF-8');

    Http con = new Http();
    Httprequest req = new Httprequest();
    req.setEndpoint('https://accounts.google.com/o/oauth2/token');
    req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
    req.setBody(params);
    req.setMethod('POST');
    Httpresponse reply = con.send(req);
    ```

    This returns a JSON response with what looks like a valid access token:

    ```json
    {
      "access_token"  : "{access_token}",
      "token_type"    : "Bearer",
      "expires_in"    : 3600,
      "refresh_token" : "{refresh_token}"
    }
    ```

    However, when I try to use the access token (either in code or with curl), Google returns a 401:

    ```
    curl -H "Authorization: Bearer {access_token}" https://www.google.com/m8/feeds/contacts/default/full/
    ```

    Incidentally, the same curl command with an access token acquired via https://code.google.com/oauthplayground/ works, which leads me to believe there is something wrong with the exchange of the authorization code for an access token, since the returned access token does not work. I should add this is all within the expires_in time frame, so it's not that the access_token has expired.

    Read the article

  • How to remove a "green screen" portrait background

    - by danbystrom
    I'm looking for a way to automatically remove (=make transparent) a "green screen" portrait background from a lot of pictures. My own attempts this far have been... ehum... less successful. I'm looking around for any hints or solutions or papers on the subject. Commercial solutions are just fine, too. And before you comment and say that it is impossible to do this automatically: no it isn't. There actually exists a company which offers exactly this service, and if I fail to come up with a different solution we're going to use them. The problem is that they guard their algorithm with their lives, and therefore won't sell/license their software. Instead we have to FTP all pictures to them where the processing is done and then we FTP the result back home. (And no, they don't have an underpaid staff hidden away in the Philippines which handles this manually, since we're talking several thousand pictures a day...) However, this approach limits its usefulness for several reasons. So I'd really like a solution where this could be done instantly while being offline from the internet.
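
    As a starting point, a plain chroma key is only a few lines: for each pixel, measure how much green dominates the other two channels and knock the pixel out when it exceeds a threshold. A naive sketch below, using System.Drawing (the System.Drawing.Common package outside .NET Framework) with a hand-tuned threshold; production-quality portrait keying adds despill, soft alpha ramps, edge feathering and hair handling on top of this, which is where the hard part lives.

    ```csharp
    using System;
    using System.Drawing;

    static class GreenScreen
    {
        // Returns a copy of the image with strongly green pixels made transparent.
        // GetPixel/SetPixel is slow; LockBits would be needed for thousands of images a day.
        public static Bitmap Key(Bitmap source, int threshold = 40)
        {
            var result = new Bitmap(source.Width, source.Height);
            for (int y = 0; y < source.Height; y++)
            {
                for (int x = 0; x < source.Width; x++)
                {
                    Color c = source.GetPixel(x, y);
                    int dominance = c.G - Math.Max(c.R, c.B);   // how "green screen" the pixel looks
                    result.SetPixel(x, y, dominance > threshold ? Color.Transparent : c);
                }
            }
            return result;
        }
    }
    ```

    With a properly lit, uniform green background the hard threshold already removes most of the backdrop; the quality gap to the commercial service is almost entirely in how the edges and spill are treated.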

    Read the article

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized in bins: item_type, item_class, item_dimension_class; other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently I do statistical analysis on the data using a Python/NumPy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analysis. For example, a user could search for all items that are blue and shipped between week x and week y, while another user could sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries. That seemed inefficient. I'm looking forward to hearing your ideas. Thanks
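
    For counts, histograms and grouped summaries at this scale, the usual answer is to let Postgres do the aggregation and ship only the summarized rows to the web layer; with indexes on the filter columns (date_collected, item_color, ...) such queries stay interactive. The table name and columns below follow the question but are assumptions about the schema, and the sketch issues the SQL from C# via the Npgsql driver only for illustration - PHP via PDO would send exactly the same statement.

    ```csharp
    using System;
    using Npgsql;   // PostgreSQL driver for .NET

    class ItemStats
    {
        private readonly string _connectionString;
        public ItemStats(string connectionString) { _connectionString = connectionString; }

        // Weekly counts and average weight of blue items, aggregated inside Postgres.
        public void PrintWeeklyCounts(DateTime from, DateTime to)
        {
            const string sql = @"
                SELECT date_trunc('week', date_collected) AS week,
                       count(*),
                       avg(item_weight)::float8
                FROM   items
                WHERE  item_color = 'blue' AND date_collected BETWEEN @from_date AND @to_date
                GROUP  BY week
                ORDER  BY week;";

            using var conn = new NpgsqlConnection(_connectionString);
            conn.Open();
            using var cmd = new NpgsqlCommand(sql, conn);
            cmd.Parameters.AddWithValue("from_date", from);
            cmd.Parameters.AddWithValue("to_date", to);
            using var reader = cmd.ExecuteReader();
            while (reader.Read())
                Console.WriteLine($"{reader.GetDateTime(0):yyyy-MM-dd}  n={reader.GetInt64(1)}  avg={reader.GetDouble(2):F2}");
        }
    }
    ```

    The web page then only has to turn a user's filter choices into a WHERE clause and chart the handful of rows that come back, instead of streaming millions of raw rows into PHP or Python.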

    Read the article

  • Error monitoring/handling on webservers

    - by Industrial
    Hi everybody, we have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database, and all memcached keys are prefixed per application. Possible scenario: if a memcached server in our cluster goes boom, we want someone (a system admin on call) to be contacted automatically by email, iPhone push notification or any other appropriate way. If we were to install 150 identical applications for our customers on our servers and a memcached server dies, all 150 applications will individually find this out and contact our system admin, who is most certainly going to think about getting a new job where he or she isn't woken up by 150 messages sent at 4:15 in the morning. Possible solution: one idea is to set up an external server for error handling that receives a $_POST or cURL request and stores the error message depending on its seriousness. On receiving the error call it would of course check whether the same memcached server has already been reported as offline, so there is no need to spam the system admin with additional reminders... The questions: What's a good approach to handling errors? How do the big guys in the industry handle this? Thanks!
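
    The core of that external error collector is deduplication: key each report by what failed (for example "memcached:10.0.0.5:11211"), record every report, but only page the admin when that key has not alerted within a cooldown window. A minimal in-memory sketch of the throttling step; the incident-key format and cooldown length are assumptions, and the check-then-set is not strictly race-free, which is acceptable for an alerting path:

    ```csharp
    using System;
    using System.Collections.Concurrent;

    class AlertThrottle
    {
        private readonly ConcurrentDictionary<string, DateTime> _lastAlertUtc =
            new ConcurrentDictionary<string, DateTime>();
        private readonly TimeSpan _cooldown;

        public AlertThrottle(TimeSpan cooldown) { _cooldown = cooldown; }

        // Called for every incoming error report; returns true only when the admin
        // should actually be notified (first report, or cooldown elapsed).
        public bool ShouldNotify(string incidentKey, DateTime nowUtc)
        {
            DateTime last = _lastAlertUtc.GetOrAdd(incidentKey, DateTime.MinValue);
            if (nowUtc - last < _cooldown) return false;
            _lastAlertUtc[incidentKey] = nowUtc;
            return true;
        }
    }
    ```

    Used as `if (throttle.ShouldNotify("memcached:10.0.0.5:11211", DateTime.UtcNow)) { /* send the page */ }`, 150 applications reporting the same dead memcached node produce one notification instead of 150, while every individual report can still be stored for later inspection.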

    Read the article

  • Getting the full name of the current user returns an empty string (C#/C++)

    - by Nir
    I am trying to get the full name of the currently logged-in user (the full name, not the username). The following C#/C++ code works fine, but on XP computers not connected to the network I get an empty string as the result if I run it ~20 minutes after login (it runs OK within the first ~20 minutes after login). The Win32 API GetUserNameEx is used rather than PrincipalContext, since PrincipalContext can take up to 15 seconds when working offline. Any help on why I am getting an empty string as the result even though a user full name is specified?

    C# code:

    ```csharp
    public static string CurrentUserFullName
    {
        get
        {
            const int EXTENDED_NAME_FORMAT_NAME_DISPLAY = 3;
            StringBuilder userName = new StringBuilder(256);
            uint length = (uint)userName.Capacity;
            string ret;
            if (GetUserNameEx(EXTENDED_NAME_FORMAT_NAME_DISPLAY, userName, ref length))
            {
                ret = userName.ToString();
            }
            else
            {
                int errorCode = Marshal.GetLastWin32Error();
                throw new Win32Exception("GetUserNameEx Failed. Error code - " + errorCode);
            }
            return ret;
        }
    }

    [DllImport("Secur32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern bool GetUserNameEx(int nameFormat, StringBuilder lpNameBuffer, ref uint lpnSize);
    ```

    C++ code:

    ```cpp
    #include "stdafx.h"
    #include <windows.h>
    #define SECURITY_WIN32
    #include <Security.h>
    #pragma comment( lib, "Secur32.lib" )

    int _tmain(int argc, _TCHAR* argv[])
    {
        char szName[100];
        ULONG nChars = sizeof( szName );
        if ( GetUserNameEx( NameDisplay, szName, &nChars ) )
        {
            printf( "Name: %s\n", szName );
        }
        else
        {
            printf( "Failed to GetUserNameEx\n" );
            printf( "%d\n", GetLastError() );
        }
        return 0;
    }
    ```

    Read the article

  • UI - How can I make users actually read what my program says?

    - by Magnetic_dud
    I have a simple form that searches through the 2000+ issues of a third-party webcomic (easy: it's like xkcd, http://url/number). The form is as simple as possible; it works like this: "What number do you want?" The user writes a number, clicks OK, and goes to the third-party website in a new tab. Then my form asks a question: "Did you find that issue memorable? Enter the name here, and we will add it to the 'best issues' on the home page." When the user writes the name of the issue, it is added to the database (pending moderation by me). So I assumed this design is the easiest and most convenient that users could find. Unfortunately, almost NONE of the users (maybe 2% behaved correctly) actually read what I asked. Some of the issues are offline and give a 404. For those issues, users write a completely wrong title in the textbox - and correctly capitalized! It's as if I named http://xkcd.com/627/ "The Great Adventures of Jack Smith". Users are from all over the country, with different browsers, and have different cookies. I cannot believe that my users will not read what I ask: it is a WHITE PAGE with a button that disappears when clicked and a textbox... easier than that??? Maybe I should add a checkbox saying "I acknowledge that this form is for submitting memorable issues, not for fun"? Oh, who would read that? Or maybe I could enable the textbox only if the user has actually clicked the link?

    Read the article

  • Error handling approach on PHP

    - by Industrial
    Hi everybody, we have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database, and all memcached keys are prefixed per application. Possible scenario: if a memcached server in our cluster goes boom, we want someone (a system admin on call) to be contacted automatically by email, iPhone push notification or any other appropriate way. If we were to install 150 identical applications for our customers on our servers and a memcached server dies, all 150 applications will individually find this out and contact our system admin, who is most certainly going to think about getting a new job where he or she isn't woken up by 150 messages sent at 4:15 in the morning. Possible solution: one idea is to set up an external server for error handling that receives a $_POST or cURL request and stores the error message depending on its seriousness. On receiving the error call it would of course check whether the same memcached server has already been reported as offline, so there is no need to spam the system admin with additional reminders... The questions: What's a good approach to handling errors? How do the big guys in the industry handle this? Thanks!

    Read the article

  • WCF with MANY database connections

    - by Jorge Dominguez
    I'm working on the development of an ERP-type .NET WinForms application consuming a WCF service. It is to be used by many small companies (in the range of 100-200). The database is SQL Server 2008, and the service will be hosted as a Windows service. Even though there will be a single DB server, our customer insists on having a separate database for each company, because of stability/support concerns (like a DB being damaged or taken offline for some reason and thus affecting all clients) - concerns coming from previous experiences (not necessarily with the same platform). With a single database, connections to the DB would be opened at service start-up and pooling used, but I'm not sure how connections could be managed in a multiple-DB scenario: Could a connection to the corresponding DB be opened and closed for each service request? Would performance be acceptable? If a connection is opened and maintained for each company accessing the system, what's the practical limit on open connections (to different databases)? It would be very interesting to hear your opinions and suggestions for this situation. Thanks
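
    Opening and closing a connection per service request is usually the right pattern here, because ADO.NET pools connections per distinct connection string: each company's database gets its own pool, and "closing" a connection just returns it to that pool rather than tearing it down. A sketch, where the database-naming convention and pool size are assumptions:

    ```csharp
    using System.Data.SqlClient;

    class CompanyDb
    {
        // One connection string per company; ADO.NET keeps a separate pool for each.
        private static string ConnectionStringFor(string companyId) =>
            $"Server=dbserver;Database=Erp_{companyId};Integrated Security=true;Max Pool Size=20;";

        public static int CountOpenOrders(string companyId)
        {
            using (var conn = new SqlConnection(ConnectionStringFor(companyId)))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders WHERE Status = 'Open'", conn))
            {
                conn.Open();               // cheap: normally served from the per-database pool
                return (int)cmd.ExecuteScalar();
            }
        }
    }
    ```

    With 100-200 databases the worst case is 100-200 pools, but idle pooled connections are reclaimed and Max Pool Size caps each pool; holding a dedicated, permanently open connection per company for the lifetime of the service is the option that does not scale.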

    Read the article

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions on which Python framework to use for an implementation of multiple 'worker' PCs coordinated from a central queue manager. For completeness, the worker PCs will be running audio conversion routines (which I do not need advice on; I have standalone code that works). The audio conversion takes a long time, and I need to coordinate an arbitrary number of workers from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration), with them reporting back some additional info, such as the runtime of the converted audio etc. At present, I have a script that makes a web service call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability and, in an ideal world, based on pending tasks too. There is a chance that workers go offline mid-conversion (but this is not likely). All the workers are Windows-based; the coordinator can be Windows or Linux. In my initial searches I have come across the following - and I know that some are cross-dependent: Celery (with RabbitMQ), Twisted, Django. Using a framework, rather than home-brewing, seems to make more sense to me right now, and I have a limited timeframe in which to develop this functional extension. An additional consideration would be using a framework that is compatible with PyQt/PySide so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further for developing a server/worker 'queue management' solution for non-web activities (this is why Django didn't seem the right fit).

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4G RAM but similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and it is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because if I change some settings (like the server-id) it will kill the server at startup. Here is the my.ini file:

    ```ini
    #MySQL Server Instance Configuration File
    # ----------------------------------------------------------------------
    # Generated by the MySQL Server Instance Configuration Wizard
    #
    #
    # Installation Instructions
    # ----------------------------------------------------------------------
    #
    # On Linux you can copy this file to /etc/my.cnf to set global options,
    # mysql-data-dir/my.cnf to set server-specific options
    # (@localstatedir@ for this installation) or to
    # ~/.my.cnf to set user-specific options.
    #
    # On Windows you should keep this file in the installation directory
    # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
    # make sure the server reads the config file use the startup option
    # "--defaults-file".
    #
    # To run the server from the command line, execute this in a
    # command line shell, e.g.
    # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
    #
    # To install the server as a Windows service manually, execute this in a
    # command line shell, e.g.
    # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
    #
    # And then execute this in a command line shell to start the server, e.g.
    # net start MySQLXY
    #
    #
    # Guidelines for editing this file
    # ----------------------------------------------------------------------
    #
    # In this file, you can use all long options that the program supports.
    # If you want to know the options a program supports, start the program
    # with the "--help" option.
    #
    # More detailed information about the individual options can also be
    # found in the manual.
    #
    #
    # CLIENT SECTION
    # ----------------------------------------------------------------------
    #
    # The following options will be read by MySQL client applications.
    # Note that only client applications shipped by MySQL are guaranteed
    # to read this section. If you want your own MySQL client program to
    # honor these values, you need to specify it as an option during the
    # MySQL client library initialization.
    #
    [client]
    port=3306

    [mysql]
    default-character-set=latin1

    # SERVER SECTION
    # ----------------------------------------------------------------------
    #
    # The following options will be read by the MySQL Server. Make sure that
    # you have installed the server correctly (see above) so it reads this
    # file.
    #
    [mysqld]

    # The TCP/IP Port the MySQL Server will listen on
    port=3306

    # Path to installation directory. All paths are usually resolved relative to this.
    basedir="D:/MySQL/"

    # Path to the database root
    datadir="D:/MySQL/data"

    # The default character set that will be used when a new schema or table is
    # created and no character set is defined
    default-character-set=latin1

    # The default storage engine that will be used when creating new tables
    default-storage-engine=MYISAM

    # Set the SQL mode to strict
    #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    # we changed this because there are a couple of queries that can get blocked otherwise
    sql-mode=""

    # performance configs
    skip-locking
    max_allowed_packet = 1M
    table_open_cache = 512

    # The maximum amount of concurrent sessions the MySQL server will
    # allow. One of these connections will be reserved for a user with
    # SUPER privileges to allow the administrator to login even if the
    # connection limit has been reached.
    max_connections=1510

    # Query cache is used to cache SELECT results and later return them
    # without actually executing the same query once again. Having the query
    # cache enabled may result in significant speed improvements if you
    # have a lot of identical queries and rarely changing tables. See the
    # "Qcache_lowmem_prunes" status variable to check if the current value
    # is high enough for your load.
    # Note: In case your tables change very often or if your queries are
    # textually different every time, the query cache may result in a
    # slowdown instead of a performance improvement.
    query_cache_size=168M

    # The number of open tables for all threads. Increasing this value
    # increases the number of file descriptors that mysqld requires.
    # Therefore you have to make sure to set the amount of open files
    # allowed to at least 4096 in the variable "open-files-limit" in
    # section [mysqld_safe]
    table_cache=3020

    # Maximum size for internal (in-memory) temporary tables. If a table
    # grows larger than this value, it is automatically converted to a disk
    # based table. This limitation is for a single table. There can be many
    # of them.
    tmp_table_size=30M

    # How many threads we should keep in a cache for reuse. When a client
    # disconnects, the client's threads are put in the cache if there aren't
    # more than thread_cache_size threads from before. This greatly reduces
    # the amount of thread creations needed if you have a lot of new
    # connections. (Normally this doesn't give a notable performance
    # improvement if you have a good thread implementation.)
    thread_cache_size=64

    #*** MyISAM Specific options

    # The maximum size of the temporary file MySQL is allowed to use while
    # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
    # If the file-size would be bigger than this, the index will be created
    # through the key cache (which is slower).
    myisam_max_sort_file_size=100G

    # If the temporary file used for fast index creation would be bigger
    # than using the key cache by the amount specified here, then prefer the
    # key cache method. This is mainly used to force long character keys in
    # large tables to use the slower key cache method to create the index.
    myisam_sort_buffer_size=64M

    # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
    # Do not set it larger than 30% of your available memory, as some memory
    # is also required by the OS to cache rows. Even if you're not using
    # MyISAM tables, you should still set it to 8-64M as it will also be
    # used for internal temporary disk tables.
    key_buffer_size=3072M

    # Size of the buffer used for doing full table scans of MyISAM tables.
    # Allocated per thread, if a full scan is needed.
    read_buffer_size=2M
    read_rnd_buffer_size=8M

    # This buffer is allocated when MySQL needs to rebuild the index in
    # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
    # into an empty table. It is allocated per thread so be careful with
    # large settings.
    sort_buffer_size=2M

    #*** INNODB Specific options ***
    innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

    # Use this option if you have a MySQL server with InnoDB support enabled
    # but you do not plan to use it. This will save memory and disk space
    # and speed up some things.
    skip-innodb

    # Additional memory pool that is used by InnoDB to store metadata
    # information. If InnoDB requires more memory for this purpose it will
    # start to allocate it from the OS. As this is fast enough on most
    # recent operating systems, you normally do not need to change this
    # value. SHOW INNODB STATUS will display the current amount used.
    innodb_additional_mem_pool_size=11M

    # If set to 1, InnoDB will flush (fsync) the transaction logs to the
    # disk at each commit, which offers full ACID behavior. If you are
    # willing to compromise this safety, and you are running small
    # transactions, you may set this to 0 or 2 to reduce disk I/O to the
    # logs. Value 0 means that the log is only written to the log file and
    # the log file flushed to disk approximately once per second. Value 2
    # means the log is written to the log file at each commit, but the log
    # file is only flushed to disk approximately once per second.
    innodb_flush_log_at_trx_commit=1

    # The size of the buffer InnoDB uses for buffering log data. As soon as
    # it is full, InnoDB will have to flush it to disk. As it is flushed
    # once per second anyway, it does not make sense to have it very large
    # (even with long transactions).
    innodb_log_buffer_size=6M

    # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
    # row data. The bigger you set this the less disk I/O is needed to
    # access data in tables. On a dedicated database server you may set this
    # parameter up to 80% of the machine physical memory size. Do not set it
    # too large, though, because competition for the physical memory may
    # cause paging in the operating system. Note that on 32bit systems you
    # might be limited to 2-3.5G of user level memory per process, so do not
    # set it too high.
    innodb_buffer_pool_size=500M

    # Size of each log file in a log group. You should set the combined size
    # of log files to about 25%-100% of your buffer pool size to avoid
    # unneeded buffer pool flush activity on log file overwrite. However,
    # note that a larger logfile size will increase the time needed for the
    # recovery process.
    innodb_log_file_size=100M

    # Number of threads allowed inside the InnoDB kernel. The optimal value
    # depends highly on the application, hardware as well as the OS
    # scheduler properties. A too high value may lead to thread thrashing.
    innodb_thread_concurrency=10

    # replication settings (this is the master)
    log-bin=log
    server-id = 1
    ```

    Thanks for all the help. It is greatly appreciated.

    Read the article

  • Are there any changes in the licensing of Visual Studio 2013 Express editions?

    - by Ramón García-Pérez
    As I was going through the license.htm file provided as part of the VS2013_RTM_WebExp_ENU.iso offline installation media for Visual Studio 2013 Express for Web, section 6 reads as follows:

    "6. PACKAGE MANAGER AND THIRD PARTY SOFTWARE INSTALLATION FEATURES. The software includes the following features (each a "Feature"), each of which enables you to obtain software applications or packages through the Internet from other sources: Extension Manager, New Project Dialog, Web Platform Installer, and Microsoft NuGet-Based Package Manager. Those software applications and packages are offered and distributed in some cases by third parties and in some cases by Microsoft, but each such application or package is under its own license terms. Microsoft is not developing, distributing or licensing any of the third-party applications or packages to you, but instead, as a convenience, enables you to use the Features to access or obtain those applications or packages directly from the third-party application or package providers. By using the Features, you acknowledge and agree that: you are obtaining the applications or packages from such third parties and under separate license terms applicable to each application or package (including, with respect to the package-manager Features, any terms applicable to software dependencies that may be included in the package); MICROSOFT MAKES NO REPRESENTATIONS, WARRANTIES OR GUARANTEES AS TO THE FEED OR GALLERY URL, ANY FEEDS OR GALLERIES FROM SUCH URL, THE INFORMATION CONTAINED THEREIN, OR ANY SOFTWARE APPLICATIONS OR PACKAGES REFERENCED IN OR ACCESSED BY YOU THROUGH SUCH FEEDS OR GALLERIES. MICROSOFT GRANTS YOU NO LICENSE RIGHTS FOR THIRD-PARTY SOFTWARE APPLICATIONS OR PACKAGES THAT ARE OBTAINED USING THE FEATURES."

    Are there any changes in the licensing of the Visual Studio 2013 Express editions? If so, does this mean that installing Visual Studio extensions in Express editions is now allowed? PS: Previous versions of the Express editions did not allow the installation of extensions as per the EULA/TOS discussed here: Limitations of Visual Studio 2012 Express Desktop

    Read the article

  • What happens if I just add a second IP to a domain?

    - by tntu
    We have two servers that are in constant sync, and two applications that connect to them, each app to a different server. We devised a new version of those apps that reads a DNS entry, gets a list of IP addresses and tries them in order. The problem now is the old apps: we have noticed that some people still use the old ones even though we have released the new versions. If we were to add two IPs to each domain, would clients receive the IPs in the order we set them, or in random order? Either way it will still work for us, but I'm just curious. If the first server goes offline, will the client application try the other? To be noted for the old version: an interruption does not affect in any way the continuation once the connection is re-established; each communication is independent of previous ones; applications connect at set intervals of anywhere between 5 seconds and 1 hour; the connection is made simply with an HTTP POST to the URL in question.

    Read the article

  • Database solution for 200million writes/day, monthly summarization queries

    - by sb
    Hello. I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask for help from someone with firsthand knowledge.) I need to log around 200 million rows (or more) per 8-hour workday to a database, then perform weekly/monthly/yearly summary queries on that data. The summary queries would collect data for things like billing statements, e.g. "How many transactions of type A did each user run this month?" (it could be more complex, but that's the general idea). I can spread the database amongst several machines as necessary, but I don't think I can take old data offline. I'll definitely need to be able to query a month's worth of data, maybe a year. These queries would be for my own use and wouldn't need to be generated in real time for an end user (they could run overnight, if needed). Does anyone have any suggestions as to which databases would be a good fit? P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?

    Read the article
