Search Results

Search found 8495 results on 340 pages for 'high availability'.


  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 Database Mirroring failover. We have a pair of mirrored databases running in high-availability mode, and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However, after the failover the new principal was in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in multi-user mode.

    Question: does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table where this setting is recorded. In particular, we once had an incident on the original principal (the one I was failing over to) where the database was put into single-user mode while we were trying to detach it. I'm wondering if that setting is cached somewhere, and whether that is the reason SQL Server put the database back into single-user mode after the failover.
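    For reference, a minimal T-SQL sketch of the recovery step (the database name is hypothetical) — first checking the access mode SQL Server currently records in sys.databases, then forcing the database back to multi-user:

        -- Check the access mode recorded for the database
        -- (MULTI_USER, SINGLE_USER, or RESTRICTED_USER).
        SELECT name, user_access_desc
        FROM sys.databases
        WHERE name = 'MyMirroredDb';  -- hypothetical name

        -- Force the database back to multi-user; ROLLBACK IMMEDIATE kicks out
        -- whichever session is occupying the single-user slot.
        ALTER DATABASE MyMirroredDb
        SET MULTI_USER WITH ROLLBACK IMMEDIATE;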

    Read the article

  • How to display my server's current response time to an average user

    - by Jason
    Sorry, I'm not really sure of the right way to ask this one, so bear with me... We have a web application that runs on a set of servers at a data center (not in our offices). We want to be able to somehow 'advertise' to our clients/users that the availability or response time of our servers has met a standard throughout the day. I am being asked to come up with a standard metric that we can easily advertise on our login screen, showing the current "standard response time" as checked every x minutes.

    My thinking is that I need to capture something like the results of a traceroute from a server (either in our office, Amazon, etc.) to one of the data center servers, and turn it into a Red/Yellow/Green style notifier for the login screen. That would let the user know that our tests are responding normally, and that if they are having delay issues it could be their network or connection to the internet. We have lots of clients in rural areas with poor connectivity, and we are trying to let them know any slowness might be on their end, not ours.

    I've got the LAMP stack to work with, but this could also be some other system altogether, as long as it can update the main server with the results. I already have Pingdom reports available, but that's a bit more than people want to read sometimes. Any ideas on what I can do?
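    To make the idea concrete, here is a minimal LAMP-style sketch, with hypothetical URLs, paths, and thresholds: a script run from cron every few minutes measures the round trip with cURL, buckets it into green/yellow/red, and writes a small status file that the login page can read.

        <?php
        // check_response.php -- run from cron every few minutes (paths/URLs hypothetical).
        $start = microtime(true);
        $ch = curl_init('https://app.example.com/health');  // hypothetical probe URL
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        $ok = (curl_exec($ch) !== false);
        curl_close($ch);
        $elapsed = microtime(true) - $start;

        // Bucket into a traffic-light status; thresholds are made up for illustration.
        if (!$ok)             { $status = 'red'; }
        elseif ($elapsed > 2) { $status = 'yellow'; }
        else                  { $status = 'green'; }

        file_put_contents('/var/www/html/status.json', json_encode(array(
            'status'  => $status,
            'seconds' => round($elapsed, 3),
            'checked' => date('c'),
        )));

    The login screen then just reads status.json and shows the matching light; running the probe from a box outside the data center keeps the measurement honest.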

    Read the article

  • Sharing storage between servers

    - by El Yobo
    I have a PHP-based web application which currently uses only one webserver but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of:

    - Simple network storage: NFS, SMB/CIFS
    - Clustered filesystems: Lustre, GFS/GFS2, GlusterFS, Hadoop DFS, MogileFS

    What I want is for a file uploaded via one webserver to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem). It seems like the clustered filesystems will also provide faster data access than local storage (for large files), but that isn't of vital importance at the moment.

    What would you recommend? Do you have any suggestions to add, or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?
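    For reference, the simplest of those options (plain NFS) looks roughly like the sketch below — hostnames and paths are hypothetical. Note that a single NFS server is itself a single point of failure, which is exactly where the clustered options earn their complexity:

        # On the server that holds the files -- /etc/exports (hypothetical host/path):
        /var/www/uploads  web2.example.com(rw,sync,no_subtree_check)

        # Re-export the updated list:
        sudo exportfs -ra

        # On the second webserver, mount the share at the same path:
        sudo mount -t nfs web1.example.com:/var/www/uploads /var/www/uploads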

    Read the article

  • Dual usage of ASP.NET MVC and PHP under the same domain

    - by jim
    Hello all, I've got a scenario where we have a customer with a Linux-hosted PHP app (Joomla) that they wish to integrate with some back-end ASP.NET MVC functionality that was created for a 'sister' site. Basically, the MVC site has price and stock availability methods which (in the sister site) populate dropdown lists and other 'order' style info on the pages. I've been tasked with looking at the integration options to allow the PHP site to use this info as a 'service'. (As ever, these guys are looking at cost of ownership, maintenance, etc., so this is their preferred route.)

    Has anyone done anything similar with success? I'd imagine (much like the sister site) liberal doses of AJAX will be employed in order to populate portions of the page on demand, so this may have a bearing on any suggestions that you may have. Also, the methods that are being called ultimately end up populating the same database, so there are no issues with correlating the IDs across the different platforms.

    I don't really want to go down any 'iframe' type route if at all possible, tho' reality may dictate this as being an option. I'm possibly (naively) imagining that I could simply invoke the MVC functions directly from the PHP app with some sort of 'session' variable being passed for authentication. Pretty tall order or pretty straightforward?

    cheers
    jim
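    For what it's worth, a minimal sketch of the non-iframe route — the MVC app exposes an action that returns JSON, and Joomla calls it server-side with cURL. The endpoint, item id, and shared-key header here are all hypothetical:

        <?php
        // Hypothetical MVC endpoint, e.g. /Stock/Availability/123, returning JSON.
        $itemId = 123;
        $apiKey = 'shared-secret';  // naive shared-key auth, for illustration only

        $ch = curl_init("http://mvc.example.com/Stock/Availability/{$itemId}");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array("X-Api-Key: {$apiKey}"));
        $json = curl_exec($ch);
        curl_close($ch);

        $stock = json_decode($json, true);  // e.g. array('price' => 9.99, 'available' => 14)

    On the MVC side the action would just return a JsonResult; the same endpoint could equally be hit with AJAX from the browser if the responses are proxied under the PHP site's own domain.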

    Read the article

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have:

    - Database server - physical box, 12 GB RAM, 2 x quad-core CPUs (2.13 GHz Xeon E5506)
    - 2 web/app servers - cloud servers, 2 GB RAM, 2 VCPUs
    - 300 GB monthly bandwidth

    I am paying around $900/month for this. My web/app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400.

    I started looking at Amazon EC2, specifically this config:

    - Database server - "High Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units
    - 2 web/app servers - "Large Instance" - 7.5 GB RAM, 4 EC2 compute units

    If I go with 1-year reserved instances, my upfront cost would be $4,500 and my monthly cost would be $700. This comes to $1,075/month when amortized (worked through below). Amazon also includes a firewall for free.

    Here are my questions:

    - Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is with disk I/O.
    - Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance on the database servers would compare?

    I am hoping that the Amazon solution will provide significantly better performance than my current, or even my improved, GoGrid setup. Having a virtual database server would also be nice in terms of availability. Right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...
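    The amortization behind those numbers, as a trivial sketch (all figures are the ones quoted above):

        # 1-year reserved instances: spread the upfront fee over 12 months.
        upfront = 4500.0   # one-time reservation fee
        monthly = 700.0    # recurring EC2 charges
        amortized = upfront / 12 + monthly   # 375 + 700 = 1075.0 per month

        gogrid = 1400.0    # upgraded GoGrid setup, firewall included
        print("EC2 %.0f/mo vs GoGrid %.0f/mo -> %.0f/mo difference"
              % (amortized, gogrid, gogrid - amortized))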

    Read the article

  • Any info about the book "Unix Internals: The New Frontiers" by Uresh Vahalia, 2nd edition (Jan 2010)

    - by claws
    This summer I'm getting into UNIX (mostly *BSD) development. I have graduate-level knowledge of operating systems. I can also understand the code and read from here and there, but the thing is I want to make the most of my time, and reading books is best for that. From my search I found that these two books are the established books on UNIX OS internals:

    - The Design and Implementation of the 4.4BSD Operating System
    - "Unix Internals: The New Frontiers" by Uresh Vahalia

    But the thing is, these books are pretty much outdated. Yay!! Lucky me: "Unix Internals: The New Frontiers" by Uresh Vahalia, 2nd edition (Jan 2010), has been released. I've been searching for information on this book. Sadly, Amazon says "Out of Print--Limited Availability", and I couldn't find any info regarding this book. This is the information I'm looking for:

    - Table of contents
    - What's new in this edition?
    - Where the hell can I buy a soft copy of this book? I really cannot afford a hardcopy.
    - How can I contact the author?

    I have a lot of hopes and expectations for this book, and I've been waiting for its release for a long time. I've sent random mails to & & requesting a proper website for this book. I even contacted the publisher for further information, but got no replies from anyone. If you have any other books that you think will help me, please suggest them. I repeat: I want to get the maximum possible out of these 2.5 summer months.

    Read the article

  • How to implement message queuing and handling in AWS with NServiceBus

    - by Pete Lunenfeld
    I am creating a new ASP.NET MVC order application in the Amazon (AWS) cloud, with the persistence layer at my local datacenter. I will be using the CQRS pattern. The goal of the project is high availability, using queue(s) to store and forward writes (commands/events) that can be picked up and handled asynchronously at my local datacenter. Then, if the WAN or my local datacenter fails, my cloud MVC app can still take orders and just queue them up until processing can resume.

    My first thought was to use AWS SQS for the queuing and create my own queue consumer/dispatcher/handler in my own C# application to process the incoming messages/events:

    MVC (@ Amazon) -- Event/POCO -- SQS -- QueueReader (@ my datacenter) -- DB

    Then I found NServiceBus. NSB seems to handle lots of details very nicely: message handling, retries, error handling, etc. I hate to reinvent the wheel, and NServiceBus seems like a full-featured and mature product that would be perfect for me. But on further research, it does NOT look like NServiceBus is really meant to be used over the WAN in physically separated environments (cloud to my datacenter). Google and SO don't really paint a good picture of using NServiceBus across the WAN like I need.

    How can I use NServiceBus across the WAN? Or is there a better solution to handle queuing and message handling between Amazon and my local datacenter?
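    For the roll-my-own fallback, the QueueReader side would be roughly the sketch below, using the AWS SDK for .NET — the queue URL and the order-persistence call are hypothetical, and retries/poison-message handling (the parts NServiceBus would provide for free) are omitted:

        // QueueReader.cs -- minimal store-and-forward consumer sketch.
        using System.Threading.Tasks;
        using Amazon.SQS;
        using Amazon.SQS.Model;

        class QueueReader
        {
            static async Task Main()
            {
                var sqs = new AmazonSQSClient();  // credentials/region come from config
                var queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders";

                while (true)
                {
                    var resp = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
                    {
                        QueueUrl = queueUrl,
                        MaxNumberOfMessages = 10,
                        WaitTimeSeconds = 20   // long polling instead of a tight loop
                    });

                    foreach (var msg in resp.Messages)
                    {
                        SaveOrderToLocalDb(msg.Body);  // hypothetical persistence call
                        // Only delete once the command is safely stored locally.
                        await sqs.DeleteMessageAsync(queueUrl, msg.ReceiptHandle);
                    }
                }
            }

            static void SaveOrderToLocalDb(string body) { /* write to the DB */ }
        }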

    Read the article

  • How to create a platform on top of CSLA? <-- in case it makes sense

    - by Peejay
    Hi all! Here is the scenario: I'm developing an application using CSLA 3.8 / C#.NET, and the application will have different modules. It's like an ERP; it will have accounting, daily time record, recruitment, etc. as modules. Now the requirement is to find the common entities per module and build a "platform" (<- the boss calls it that) from them. For example, DTR will have an entity "Employee" and Recruitment will have "Applicant", so one common entity that you can derive both from, and that can be put in the platform, is "Person". "Person" will contain typical info like name, address, contact info, etc.

    I know it sounds like OOP 101. The thing is, I don't know how to go about it. How I wish it were just simple inheritance, but the requirement is to create an API of some sort to be used by the modules, using CSLA. In CSLA you create smart objects, inheriting from the CSLA base classes like BusinessListBase, ReadOnlyListBase, etc., right? What if, for example, I created a BusinessBase Applicant class with properties like salary demand, availability date, etc.? For the personal info I would need the "Person" from the "platform" and implement it in the Applicant class.

    So in summary I have several questions:

    1. How do I create such a platform?
    2. If such a platform is possible, how will it be implemented in each module's entities? (I'm already inheriting from the base classes of CSLA.)
    3. If 1 and 2 are possible, does it have advantages for development and maintenance of the app?

    The reason I'm asking #3 is that, the way I see it, even if I am able to create a platform for this, I will still need to define the platform entity's properties on my module entities in order to have validation and all. I'm sorry if I'm typing nonsense; I'm really confused. I hope someone can enlighten me. Thank you all!
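    To make the shape of the idea concrete, here is a rough sketch using CSLA-style managed properties (class and property names are illustrative, and I may well be misusing CSLA here): an abstract "platform" base carrying the common Person fields, with a module entity deriving from it.

        using System;
        using Csla;

        [Serializable]
        public abstract class PersonBase<T> : BusinessBase<T> where T : PersonBase<T>
        {
            // Common "platform" fields, registered once for every deriving entity.
            private static readonly PropertyInfo<string> NameProperty =
                RegisterProperty<string>(p => p.Name);
            public string Name
            {
                get { return GetProperty(NameProperty); }
                set { SetProperty(NameProperty, value); }
            }
            // Address, contact info, etc. would be registered the same way.
        }

        [Serializable]
        public class Applicant : PersonBase<Applicant>
        {
            // Module-specific field on top of the inherited Person ones.
            private static readonly PropertyInfo<decimal> SalaryDemandProperty =
                RegisterProperty<decimal>(p => p.SalaryDemand);
            public decimal SalaryDemand
            {
                get { return GetProperty(SalaryDemandProperty); }
                set { SetProperty(SalaryDemandProperty, value); }
            }
        }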

    Read the article

  • Should I close sockets from both ends?

    - by Roman
    I have the following problem. My client program monitors the availability of a server in the local network (using Bonjour, but that does not really matter). As soon as a server is "noticed" by the client application, the client tries to create a socket: Socket(serverIP, serverPort);. At some point the client can lose the server (Bonjour says the server is not visible in the network anymore), so the client decides to close the socket, because it is not valid anymore.

    At some moment the server appears again, so the client tries to create a new socket associated with this server. But! The server can refuse to create this socket, since it (the server) already has a socket associated with that client IP and client port. This happens because the socket was closed by the client, not by the server. Can this happen? And if it is the case, how can this problem be solved?

    Well, I understand that it is unlikely that the client will try to connect to the server from the same client port, since the client selects its ports randomly. But it still can happen (just by chance), right?
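    A small Java illustration of the ephemeral-port point (host and port are hypothetical): each new Socket gets a fresh local port from the OS, so the reconnect arrives at the server as a brand-new (clientIP, clientPort) pair rather than a reuse of the half-closed one.

        import java.net.Socket;

        public class ReconnectDemo {
            public static void main(String[] args) throws Exception {
                Socket first = new Socket("server.local", 9000);  // hypothetical host/port
                System.out.println("first local port:  " + first.getLocalPort());
                first.close();  // this (ip, port) pair lingers in TIME_WAIT on the client

                // A new connection: the OS hands out a different ephemeral port.
                Socket second = new Socket("server.local", 9000);
                System.out.println("second local port: " + second.getLocalPort());
                second.close();
            }
        }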

    Read the article

  • SQL Structure of DB table with different types of columns

    - by Dmitry Dvornikov
    I have a problem with optimizing the structure of a database. I'll try to explain it exactly. I am creating a project where we can add different values, but these values must be stored in columns of different types in the database (e.g. int, double, varchar). What is the best way to store values of different types in the database? In the project I'm using Propel 1.6. The point is to be able to add values with 'int', 'varchar' and other column types, while keeping searches on the table efficient.

    In total, I have two ideas. The first is to create a "value" table with the columns "id", "value_int", "value_double", "value_varchar", etc., with the corresponding column types. Depending on the type of the value, the record will be saved with the value in the appropriate column (the rest will be NULL). The second solution is to create separate tables such as "value_int", "value_varchar", etc. Each would have the columns "id" and "value", where "value" has the relevant type (int, varchar, etc.).

    I must admit that I don't believe in either of the above solutions. Originally I was thinking about one "value" table where the column would be of type "text", but this solution would probably be even worse. I would like to know your opinion on this topic; maybe something else would be better. Thanks in advance.

    EDIT: For example, we have three tables:

    USER: [table of users]
    * id
    * name

    FIELD: [table of profile fields - where the column 'type' is the type of the field, e.g. int or varchar]
    * id
    * type
    * name

    VALUE:
    * id
    * user_id - (FK user.id)
    * field_id - (FK field.id)
    * value

    So each row in the USER table is a user, and the profile is stored in the VALUE table. But each profile field may have a different type (column 'type' in the FIELD table), and based on that I want the value to go into the appropriate column of the appropriate type.
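    A DDL sketch of the first idea, with hypothetical names and MySQL-flavoured syntax — one typed column per supported type, exactly one of them non-NULL per row, and composite indexes so searches by field and value stay efficient:

        CREATE TABLE value (
            id            INT AUTO_INCREMENT PRIMARY KEY,
            user_id       INT NOT NULL,
            field_id      INT NOT NULL,
            value_int     INT          NULL,
            value_double  DOUBLE       NULL,
            value_varchar VARCHAR(255) NULL,
            KEY idx_field_int     (field_id, value_int),
            KEY idx_field_double  (field_id, value_double),
            KEY idx_field_varchar (field_id, value_varchar)
        );

        -- Reading a profile: pick whichever typed column is populated.
        SELECT f.name,
               COALESCE(v.value_varchar, v.value_double, v.value_int) AS value
        FROM value v
        JOIN field f ON f.id = v.field_id
        WHERE v.user_id = 1;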

    Read the article

  • jQuery Post causes a "Permission denied" warning in IE 6 and IE 7

    - by kwokwai
    Hi all, I am using Firefox 3 and IE 6/7 to test a simple PHP web page that uses a jQuery $.post to pass some data to and from a web page on another server:

    <script type="text/javascript">
    $(document).ready(function(){
      $("#data\\[User\\]\\[name\\]").click(function(){
        var usr = $("#data\\[User\\]\\[name\\]").val();
        if(usr.length >= 4){
          $("#username").append('<span id="loaderimg" name="loaderimg"><img align="absmiddle" src="loader.gif"/> Checking data availability,&nbsp;please wait.</span>');
          var url = "http://mysite.com/site1/toavail/"+usr;
          $.post(url, function(data) { alert(data); });
        }
      });
    });
    </script>

    <table border=0 width="100%">
      <tr>
        <td>Username</td>
        <td>
          <div id="username">
            <input type="text" name="data[User][name]" id="data[User][name]">
          </div>
        </td>
      </tr>
    </table>

    In Firefox 3, the alert box showed an empty message. In IE 6 and IE 7, I got an error message saying "Permission denied".
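    Both symptoms point at the same-origin policy: the XHR goes to mysite.com while the page is served from a different host, so Firefox silently returns an empty response and IE raises "Permission denied". One common workaround (sketched here with a hypothetical file name) is a small proxy on the page's own domain, so the browser never makes a cross-domain request:

        <?php
        // proxy.php -- lives on the same domain as the page (hypothetical name).
        $usr = isset($_POST['bcode']) ? $_POST['bcode'] : '';
        echo file_get_contents('http://mysite.com/site1/toavail/' . urlencode($usr));

    The jQuery call then becomes $.post('proxy.php', { bcode: usr }, function(data){ alert(data); });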

    Read the article

  • Web service slowdown

    - by user238591
    Hi, I have a web service slowdown. My (web) service is in gSOAP and managed C++. It's not IIS/Apache hosted, but it speaks XML. My client is in .NET. The service computation time is light (<0.1 s to prepare a reply). I expect the service to be smooth and fast and to have good availability. I have about 100 clients, a 1 s response time is mandatory, and clients make about 1 request per minute. Clients check for the web service's presence with a TCP open-port test. So, to avoid possible congestion, I turned gSOAP's KeepAlive off. Until there, everything ran fine: I barely saw connections in TCPView (Sysinternals).

    A new synchronisation program now calls the service in a loop. It's a higher load, but everything is processed in less than 30 seconds. With Sysinternals TCPView, I see that about a thousand connections are in TIME_WAIT. They slow down the service, and it now takes seconds for the service to reply. Could it be that I need to reset the SoapHttpClientProtocol connection? Has anyone else seen TIME_WAIT ghosts from a web service called in a loop?
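    One thing worth trying, sketched below: each keep-alive-less call tears down its TCP connection and leaves it in TIME_WAIT, so the loop piles them up. On the .NET side the usual pattern is to re-enable keep-alive by overriding GetWebRequest in a class derived from the generated proxy — the proxy name here is hypothetical:

        using System;
        using System.Net;

        // MyServiceProxy is the generated SoapHttpClientProtocol proxy (hypothetical name).
        public class KeepAliveServiceProxy : MyServiceProxy
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                var request = (HttpWebRequest)base.GetWebRequest(uri);
                request.KeepAlive = true;  // reuse one TCP connection across the loop
                return request;
            }
        }

    The server side has to accept keep-alive again for this to help, so gSOAP's keep-alive would need to be turned back on (with a sensible connection limit) rather than relying on one connection per call.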

    Read the article

  • How do I sit 2 divs left and right of each other

    - by s32ialx
    So what I am trying to accomplish is sitting <div id="box"> on the left and <div id="box2"> on the right, side by side inside the container <div id="content">:

    <div id="content">
      <div id="box1">
        <h2>Company Information</h2>
        <img src="images/photo-about.jpg" alt="" width="177" height="117" class="aboutus-img" />
        <p color="FF6600"> some content here </p>
      </div>
      <div id="clear"></div>
      <div id="box" style="width:350px;">
        <h2>Availability</h2>
        <p> some more content here </p>
      </div>
      <div id="clear"></div>
      <div id="box2" style="width:350px;float:left;overflow: auto;">
        <h2>Our Commitment</h2>
        <p> some content here </p>
      </div>
    </div>
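    A minimal CSS sketch of one common way to do this (the container width is an assumption): float the two boxes in opposite directions and let the container wrap its floats. Note that the markup above floats only #box2 and puts a <div id="clear"> between the boxes, which forces them onto separate lines — that clearing div would need to go (and since ids must be unique per page, repeated clearers are better written as class="clear").

        #content { overflow: hidden; }            /* makes the container wrap its floats */
        #box     { float: left;  width: 350px; }
        #box2    { float: right; width: 350px; }  /* or float:left with a margin */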

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008, and I'm getting the same results both in my dev environment (Win7 x64 + SQL x64 Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.).

    The problem has the following qualities and resiliences:

    - The problem persists no matter what the target table is.
    - RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
    - Perfmon shows buffers are not spooling, disk response times are normal, and disk availability is high.
    - CPU usage is low (hovers around 25%, shared between sqlserver.exe and DtsDebugHost.exe).
    - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 KB/s).
    - The OLE DB destination and the SQL Server destination both exhibit this problem.

    To sum it up, I would expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

    Read the article

  • Why represent shopping carts and order invoices differently in a domain model?

    - by Todd
    I've built some shopping cart systems in the past, but I always designed them such that the final order invoice is just a shopping cart that has been marked as "purchased". All the logic for adding/removing/changing items in a cart is also the logic for the order, and all the data is stored in the same tables in the database. But it seems this is not the proper way to design an e-commerce site.

    Can someone explain the benefit of separating the shopping cart from invoices in the domain model? It seems to me this would lead to a lot of duplicated code and an extra set of tables in the database, and make it harder to maintain in the event the system needs to start accommodating more complicated orders (like specifying selected options for an item, which may or may not change the price/availability/shipping time of the order). I'm assuming I just haven't seen the light, as every book and other example I see seems to separate these two seemingly similar concerns -- but I can't find any explanation of the benefit of doing so!

    It's also the case in the systems that I design that changes are often made after the initial order is confirmed. It's not uncommon for items to be removed, replaced, or added afterwards (but prior to fulfillment).

    Read the article

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions on which Python framework to use for an implementation of multiple 'worker' PCs coordinated from a central queue manager. For completeness, the worker PCs will be running audio conversion routines (which I do not need advice on, and have standalone working code for).

    The audio conversion takes a long time, and I need to coordinate an arbitrary number of the workers from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration), with them reporting back some additional info, such as the runtime of the converted audio etc. At present, I have a script that makes a webservice call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability, and in an ideal world, based on pending tasks too. There is a chance that workers can go offline mid-conversion (but this is not likely). All the workers are Windows based; the coordinator can be Windows or Linux.

    I have (in my initial searches) come across the following - and I know that some are cross-dependent:

    - Celery (with RabbitMQ)
    - Twisted
    - Django

    Using a framework, rather than home-brewing, seems to make more sense to me right now, and I have a limited timeframe in which to develop this functional extension. An additional consideration would be a framework that is compatible with PyQt/PySide, so that I can write a simple UI to display queue status etc.

    I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further, for developing a server/worker 'queue management' solution for non-web activities (this is why Django didn't seem the right fit).
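    For a sense of scale, the Celery/RabbitMQ option would make the worker side roughly this small — the broker URL and the conversion routine are hypothetical, and it's worth checking Windows support for whichever Celery version you land on:

        # tasks.py -- runs on every worker PC via: celery -A tasks worker
        from celery import Celery

        app = Celery('audio', broker='amqp://guest@queue-host//')  # hypothetical broker

        @app.task(acks_late=True)  # acks_late: requeue the job if a worker dies mid-conversion
        def convert(source_url, profile):
            runtime = run_audio_conversion(source_url, profile)  # your existing routine
            return {'source': source_url, 'runtime': runtime}

    The coordinator then just calls convert.delay('http://files/show1.wav', 'mp3-192k'), and RabbitMQ hands the task to whichever worker is free — which covers the "based on availability" requirement out of the box.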

    Read the article

  • AJAX help needed

    - by tharindu
    Hi guys, I have a problem in AJAX... I'm a newcomer to AJAX. :)

    <script type="text/javascript">
    $(document).ready(function() {
      $("#bcode").focus();
      // prevents autocomplete in some browsers
      $("#bcode").attr('autocomplete', 'off').keyup(function(event) {
        var name = $("#bcode").val();
        $("#status").empty();
        if(name.length > 17) {
          selectAll();
          $("#status").html('<img align="absmiddle" src="loading.gif" /> Checking availability...').show();
          $.ajax({
            type: "POST",
            url: "namecheck.php",
            data: "bcode=" + name,
            success: function(msg) {
              $("#status").html(msg).show();
            }
          });
        } else {
          $("#status").html('').addClass('err').show();
        }
      });
    });
    </script>

    I get the text box value 'bcode' using $_POST['bcode']:

    <input name="bcode" type="text" class="bcode" id="bcode" maxlength="18" />

    I also have a menu/list in that form:

    <select name="pallete" class="list_box" id="select">
      <option value="P0" selected> </option>
      <option value="P1">P1</option>
      <option value="P2">P2</option>
      <option value="P3">P3</option>
      <option value="P4">P4</option>
      <option value="P5">P5</option>
    </select>

    How can I access the selected item from the PHP file using $_POST['pallete']? Please help me. Thanks in advance.
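    For what it's worth, a minimal sketch of the usual approach: only the fields included in the AJAX request ever reach namecheck.php, so the select's current value has to be sent along explicitly, e.g.

        $.ajax({
          type: "POST",
          url: "namecheck.php",
          data: { bcode: name, pallete: $("#select").val() },
          success: function(msg) { $("#status").html(msg).show(); }
        });

    and then in namecheck.php both $_POST['bcode'] and $_POST['pallete'] are available.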

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16 GB RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server.

    We just upgraded from a machine with only 4 GB RAM but with similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application.

    The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and this is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting up config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down!

    FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup.

    Here is the my.ini file:

        # MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # We changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # Performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # The query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often, or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        # *** MyISAM specific options ***

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the key buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        # *** InnoDB specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition for the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # Replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
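    For reference, a few standard MySQL diagnostics that show where a pegged mysqld is spending its time (run from the mysql client) — with a MyISAM-only setup at ~100 queries/second, table-lock waits and on-disk temporary tables are common culprits:

        -- What the busy threads are doing right now:
        SHOW FULL PROCESSLIST;

        -- MyISAM key cache effectiveness (misses vs. requests):
        SHOW GLOBAL STATUS LIKE 'Key_read%';

        -- Temporary tables spilling to disk (often sorts/GROUP BYs):
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';

        -- MyISAM table-lock contention:
        SHOW GLOBAL STATUS LIKE 'Table_locks_waited';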

    Read the article

  • Getting hover text with Selenium in Java

    - by BinaryEmpire
    I am trying to figure out how to get the product availability text from a page like http://www.walmart.com/browse/TV-Video/TVs/_/N-96v3? (once a store has been selected). I selected 76574 as my zip code and went to the "In My Store" tab. The code I have now is:

        WebElement hoverElement = driver.findElement(By.xpath(".//*[@id='Body_15992428']/span"));
        WebElement hidden = driver.findElement(By.xpath(".//*[@id='slapInfo_NoVariant_15992428']/div"));
        Actions builder = new Actions(driver);
        builder.clickAndHold(hoverElement).build().perform();
        System.out.println(hidden.getText());

    Edit: I tried profile.setEnableNativeEvents(false); and the text is now displayed in the automated browser window, but I still cannot get to the text I want. It does not throw an exception; it just displays nothing, because the driver thinks the element is still hidden. Anyone know how to fix this?

    I keep getting:

        Exception in thread "main" org.openqa.selenium.InvalidElementStateException: Cannot perform native interaction: Could not load native events component.

    even after I do profile.setEnableNativeEvents(true);. Are there any other ways I can get the hidden text, or what am I doing wrong here?

    Additionally, while I was inspecting the page with Firebug, I saw that there is this code:

        <script type="text/javascript">
        WALMART.$(document).ready(function(){
          WALMART.$('#Body_15992428').hover(function(){
            WALMART.$('#SeeStoreAvailBubble').wmBubble('update', WALMART.$('#bubbleMsgUpdate_15992428').html());
          });
        });
        </script>

    I don't really know how to do things directly with JavaScript, but is there any way of getting the message text directly from that with a JavaScript executor?
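    For what it's worth, a minimal sketch of the JavaScript-executor idea — reading the bubble's source element straight from the DOM, which sidesteps the visibility check that makes getText() return an empty string on hidden nodes (the id is the one from the page source above):

        // Cast the driver and pull the hidden markup directly.
        JavascriptExecutor js = (JavascriptExecutor) driver;
        String html = (String) js.executeScript(
            "var el = document.getElementById('bubbleMsgUpdate_15992428');" +
            "return el ? el.innerHTML : null;");
        System.out.println(html);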

    Read the article

  • Fastest reliable way for Clojure (Java) and Ruby apps to communicate

    - by jkndrkn
    Hi there, we have cloud-hosted (Rackspace Cloud) Ruby and Java apps that will interact as follows:

    1. The Ruby app sends a request to the Java app. The request consists of a map structure containing strings, integers, other maps, and lists (analogous to JSON).
    2. The Java app analyzes the data and sends a reply to the Ruby app.

    We are interested in evaluating both messaging formats (JSON, Protocol Buffers, Thrift, etc.) and message transmission channels/techniques (sockets, message queues, RPC, REST, SOAP, etc.). Our criteria:

    - Short round-trip time.
    - Low round-trip-time standard deviation. (We understand that garbage collection pauses and network usage spikes can affect this value.)
    - High availability.
    - Scalability (we may want to have multiple instances of the Ruby and Java apps exchanging point-to-point messages in the future).
    - Ease of debugging and profiling.
    - Good documentation and community support.
    - Bonus points for Clojure support.

    What combination of message format and transmission method would you recommend? Why? I've gathered here some materials we have already collected for review:

    - Comparison of various Java serialization options
    - Comparison of Thrift and Protocol Buffers (old)
    - Comparison of various data interchange formats
    - Comparison of Thrift and Protocol Buffers
    - Fallacies of Protocol Buffers RPC features
    - Discussion of RPC in the context of AMQP (message queuing)
    - Comparison of RPC and message-passing in distributed systems (pdf)
    - Criticism of RPC from the perspective of a message-passing fan
    - Overview of Avro from a Ruby programmer's perspective

    Read the article

  • How would you structure your workflow for a web application?

    - by cx42net
    Hi! When designing a web application (or something else), it's good to have a workflow, and it's better to have a well-ordered one. Starting with this idea in mind, I'd like to know what your process is, from having the idea to maintaining the great working project. For me, currently, the process is the following:

    1. Having the idea
    2. Checking whether this project already exists, and how it works
    3. Describing its functionality on paper
    4. Finding a good and adequate name for it (and checking the domain availability with WHOISMyProject)
    5. Making a quick layout of the project on paper
    6. Designing the project (via TheGimp, Photoshop, etc.)
    7. Making a complete mockup of each page
    8. Developing a prototype of the client-side application (with fake data)
    9. Developing the server side
    10. Testing
    11. Making the documentation/help/FAQ
    12. Releasing the project
    13. Maintaining it

    Would you change the order of some points? Add or remove some? I would be pleased to know how you do it. I'm looking to set up a perfect workflow in order to make my project become real in the best way possible. Thank you for your opinion!

    Read the article

  • iPhone app developed with SDK 4.2 requires backward compatibility with iOS 3.1.3 ... easy way?

    - by mrd3650
    I have built an iPhone app with SDK 4.2; however, I now also want to make it compatible with iOS 3.1.3. The first step was to set the Deployment Target to 3.1.3. It runs fine on the 3.2 Simulator, but the app crashes at times, since I'm using some methods which are not available in this earlier SDK. So my question is: is there a straightforward way to locate the offending methods/classes I'm using in my project which are not available in 3.1.3 (without manually going through each method call and consulting the docs for the SDK availability)? Thanks.

    UPDATE: I have executed the app on 3.1.3 and attempted to manually test each execution path with the hope of locating all exceptions. This was completed with some level of success. However, what if the application is huge and there are lots of execution paths? There must be some tool for this scenario. Any thoughts are much appreciated.
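    Until such a tool turns up, the pattern commonly recommended (a sketch, not something verified end to end here) is to weak-link the newer frameworks and guard each 4.x-only call at runtime; the selector and class below are just illustrations:

        // Guard a method that only exists on newer iOS versions:
        if ([[UIDevice currentDevice] respondsToSelector:@selector(isMultitaskingSupported)]) {
            // safe to call the 4.x API here
        }

        // Guard an entire class that 3.1.3 doesn't ship:
        Class printCls = NSClassFromString(@"UIPrintInteractionController");
        if (printCls != nil) {
            // the class exists on this OS version
        }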

    Read the article

  • Safe way for getting/finding a vertex in a graph with custom properties -> good programming practice

    - by Shadow
    Hi, I am writing a Graph class using the Boost Graph Library. I use custom vertex and edge properties, and a map to store/find the vertices/edges for a given property. I'm satisfied with how it works so far. However, I have a small problem where I'm not sure how to solve it "nicely". The class provides a method

    Vertex getVertex(VertexProperties v_prop)

    and a method

    bool hasVertex(VertexProperties v_prop)

    The question now is: would you judge this as good programming practice in C++? My opinion is that one first has to check whether something is available before one can get it. So, before getting a vertex with a desired property, one has to check that hasVertex() would return true for those properties.

    However, I would like to make getVertex() a bit more robust. At the moment it will segfault when one calls getVertex() directly without first checking whether the graph has a corresponding vertex. A first idea was to return a NULL pointer, or a pointer that points past the last stored vertex. For the latter, I haven't found out how to do this. But even with this "robust" version, one would have to check for correctness after getting a vertex, or one would run into a segfault when dereferencing that vertex pointer, for example. Therefore I am wondering whether it is "OK" to let getVertex() segfault if one does not check for availability beforehand?
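    One shape this often takes (a sketch with illustrative names, not the BGL-specific types): fold the has/get pair into a single lookup whose return type makes absence explicit, e.g. boost::optional, so a caller cannot obtain a vertex without having handled the missing case.

        #include <boost/optional.hpp>
        #include <map>
        #include <string>

        struct VertexProperties {
            std::string name;
            bool operator<(const VertexProperties& o) const { return name < o.name; }
        };

        class Graph {
        public:
            typedef int Vertex;  // stand-in for the BGL vertex descriptor

            // One call instead of hasVertex() + getVertex(); absence is encoded
            // in the return type instead of causing a segfault.
            boost::optional<Vertex> findVertex(const VertexProperties& p) const {
                std::map<VertexProperties, Vertex>::const_iterator it = lookup_.find(p);
                if (it == lookup_.end())
                    return boost::none;
                return it->second;
            }

        private:
            std::map<VertexProperties, Vertex> lookup_;
        };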

    Read the article

  • What does 'Highest active time' for disk activity in Windows resource monitor mean?

    - by Nick R
    I know what disk I/O, disk queue length and the other measures are, but what does 'Highest active time' mean? Is it the amount of time the disk is busy handling requests, or something else? When it is high, does it mean the CPU is busy doing I/O work, or is it just indicating that the disk is busy handling requests? I'm trying to work out whether 50% active time means that the disk is seeking, reading or writing 50% of the time, rather than the kernel spending 50% of its time servicing I/O requests.

    Edit: Another quick data point here. If you look at the difference between an SSD and a physical disk, the SSD shows significantly less active time, so I guess this really means the amount of time the operating system is waiting for the disk to respond and return data.

    Read the article

  • Windows Server 2003 - Handling hundreds of simultaneous downloads

    - by Paul Hinett
    At the moment I have a single server with 4 1 TB hard disks. Daily, over 150 MP3 music files are uploaded (around 80 MB each). At busy periods there are over 300 people streaming/downloading these mixes all at once, and 75% of the activity is on the most recently uploaded material, which all sits on a single hard disk. My read speeds on that disk are very low due to the high activity of 200+ reads all happening at the same time on a single hard disk (I ran some tests with HDTach).

    What would be a logical solution to this? A couple of ideas I had are:

    - Load balance with another server
    - Install faster hard disks (what are best these days? SCSI / SATA?)
    - Spread the most accessed files over the 4 drives, so the load is shared between all 4 disks, instead of all the most accessed (most recent) files sitting on the most recently installed drive

    Obviously load balancing is the most expensive option, but would it dramatically help? Some help with this situation would be great!

    Read the article
