Search Results

Search found 28957 results on 1159 pages for 'single instance'.

  • Is it secure to store the cert/key on a private AMI?

    - by Phillip Oldham
    Are there any major security implications to bundling a private AMI which contains the private key/certificate and environment variables? For resiliency I'm creating an EC2 image which should be able to boot and configure itself without any intervention. After boot it will attempt to:

    - Attach and mount specific EBS volume(s)
    - Associate a specific Elastic IP
    - Start issuing backups of the EBS volume(s) to S3

    However, to do this it will need the private key/pem files and certain environment variables to be available on start-up. Since this is a private AMI, I'm wondering if it will be "safe" to store these variables/files directly in the image so that I don't need to specify any user-data information and can therefore start a new instance remotely (from my iPhone, if needed) should the instance be terminated for any reason.
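
    A minimal sketch of the boot-time self-configuration described above, using today's boto3 (the volume ID, allocation ID, region, and device name are hypothetical placeholders). If the instance is given an IAM role instead, no key files need to be baked into the image at all:

        # Sketch: boot-time self-configuration, assuming boto3 and an IAM
        # instance role (so no private keys are stored in the AMI itself).
        # All IDs below are hypothetical placeholders.
        import urllib.request
        import boto3

        METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"
        VOLUME_ID = "vol-0123456789abcdef0"
        ALLOCATION_ID = "eipalloc-0123456789abcdef0"

        def self_configure():
            # Discover which instance we are from the metadata service.
            instance_id = urllib.request.urlopen(METADATA_URL).read().decode()
            ec2 = boto3.client("ec2", region_name="us-east-1")
            # Attach the data volume, then claim the Elastic IP.
            ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=instance_id,
                              Device="/dev/sdf")
            ec2.associate_address(AllocationId=ALLOCATION_ID,
                                  InstanceId=instance_id)

        if __name__ == "__main__":
            self_configure()

    Mounting the volume and kicking off the S3 backups would follow the same pattern; with role-based credentials, the "keys in the image" question largely disappears.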

  • MySQL slow query log logging all queries

    - by Blanka
    We have a MySQL 5.1.52 Percona Server 11.6 instance that suddenly started logging every single query to the slow query log. The long_query_time configuration is set to 1, yet suddenly we're seeing every single query (e.g. we just saw one that took 0.000563s!). As a result, our log files are growing at an insane pace. We just had to truncate a 180G slow query log file. I tried setting the long_query_time variable to a really large number (1000000) to see if it stopped altogether, but same result.

        show global variables like 'general_log%';
        +------------------+--------------------------+
        | Variable_name    | Value                    |
        +------------------+--------------------------+
        | general_log      | OFF                      |
        | general_log_file | /usr2/mysql/data/db4.log |
        +------------------+--------------------------+
        2 rows in set (0.00 sec)

        show global variables like 'slow_query_log%';
        +---------------------------------------+-------------------------------+
        | Variable_name                         | Value                         |
        +---------------------------------------+-------------------------------+
        | slow_query_log                        | ON                            |
        | slow_query_log_file                   | /usr2/mysql/data/db4-slow.log |
        | slow_query_log_microseconds_timestamp | OFF                           |
        +---------------------------------------+-------------------------------+
        3 rows in set (0.00 sec)

        show global variables like 'long%';
        +-----------------+----------+
        | Variable_name   | Value    |
        +-----------------+----------+
        | long_query_time | 1.000000 |
        +-----------------+----------+
        1 row in set (0.00 sec)
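
    Two things worth checking here, sketched below: long_query_time exists at both global and session scope, so connections opened before a SET GLOBAL keep their old value, and on a Percona build the slow-log patches add extra log_slow_* switches (rate limiting, filtering) that can change what gets logged. The connection parameters are hypothetical:

        # Sketch: compare global vs. session slow-log settings and dump any
        # Percona log_slow_* switches. Connection details are hypothetical.
        import mysql.connector

        conn = mysql.connector.connect(host="db4", user="root",
                                       password="***")
        cur = conn.cursor()
        for scope in ("GLOBAL", "SESSION"):
            cur.execute(f"SHOW {scope} VARIABLES LIKE 'long_query_time'")
            print(scope, cur.fetchall())
        # Percona's slow-log patches add rate limiting / filtering switches
        # that can effectively widen or narrow what is written to the log.
        cur.execute("SHOW GLOBAL VARIABLES LIKE 'log_slow%'")
        print(cur.fetchall())
        conn.close()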

  • Automating the Backup of a SQL Server 2008 Express Database

    - by JaydPage
    Steps involved:

    1) Create a database backup script.
    2) Create a scheduled task to run the backup script.

    1. Create a database backup script

    a) Download and install SQL Server Management Studio. This is a free tool available on the Microsoft website.
    b) Once Management Studio is installed, launch it and connect to the SQL Server instance that contains the database that you want to back up.
    c) Right-click on the database, then in the menu choose Tasks -> Back Up...
    d) This will open a window where you can choose your backup options. Once you are happy with the options, click on the "Script" button near the top and select the "Script Action to File" option.
    e) Save the file.

    2. Create a scheduled task to run the backup script

    a) Open up Windows Task Scheduler.
    b) Create a new task using the wizard. When asked to select a program, browse to C:\Program Files\Microsoft SQL Server\100\Tools\binn\SQLCMD.exe
    c) There are 2 arguments that need to be set: -S \SERVER_INSTANCE_NAME -i "PATH_OF_SQLBACKUP_SCRIPT", where SERVER_INSTANCE_NAME is the name of the instance of SQL Server that contains your database, e.g. (local), and PATH_OF_SQLBACKUP_SCRIPT is the path of your backup script, e.g. "C:\Program Files\Microsoft SQL Server\DatastoreBackup.sql"
    d) Adjust the task to run at the desired times and you are done.
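
    For illustration, the scheduled task boils down to a single sqlcmd command line; here is a sketch of that invocation wrapped in Python, where the instance name and script path are hypothetical placeholders:

        # Sketch: the sqlcmd invocation the scheduled task performs.
        # Instance name and script path are hypothetical placeholders.
        import subprocess

        SQLCMD = r"C:\Program Files\Microsoft SQL Server\100\Tools\binn\SQLCMD.exe"
        subprocess.run(
            [SQLCMD,
             "-S", r".\SQLEXPRESS",   # hypothetical instance name
             "-i", r"C:\Program Files\Microsoft SQL Server\DatastoreBackup.sql"],
            check=True,  # raise if sqlcmd reports a failure
        )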

  • Now Available: Oracle Utilities Customer Care & Billing Version 2.4.0 SP1

    - by Roxana Babiciu
    We are pleased to announce the general availability of Oracle Utilities Customer Care & Billing 2.4.0 SP1.

    Key Features & Benefits: Oracle Utilities Customer Care & Billing 2.4.0 SP1 includes several base enhancements and a new licensable module called Customer Program Management. Key base enhancements in this release are:

    - Configuration Migration Assistant (Additional Migration Plans) – Configuration Migration Assistant (CMA) was introduced in Oracle Utilities Application Framework V4.2.0 to supersede the ConfigLab facility. Oracle Utilities Customer Care and Billing now has a large number of migration plans to support migrating administration objects between environments.
    - Encryption – Ability to configure encryption for fields that store sensitive data such as credit card numbers, bank account numbers, social security numbers, and MICR ID.
    - Single Euro Payments Area (SEPA) Direct Debit – Functionality for configuring recurring direct debit payments in accordance with the Single Euro Payments Area (SEPA) initiative.
    - Usage Enhancement for Bill Print – Allows additional information to be captured on a usage request to support billing when meter reads are not obtained from Oracle Utilities Customer Care & Billing but from a meter data management system (e.g. Oracle Utilities Meter Data Management).
    - Preferences Portal – Communication preference zones allowing utilities to track customers' preferred communication channels for various types of notifications or communications (e.g. phone, SMS, email).

    More information can be found on OPN!

  • I need some MySQL lookup table advice

    - by Gary Beam
    I have a MySQL database with about 200 tables. 50 of these are small 2-field 'id-data' lookup tables. Several of these DBs are hosted on a shared server. I have been informed that I need to reduce the total number of tables in the shared hosting environment because of performance issues relating to too many tables. My question is: could/should the 50 2-field lookup tables be combined into a single 3-field table with 'id-field_name-data' fields? Even if this can be done, I will have a lot of work to do on the PHP user application. My other choice is moving the DBs to a dedicated server at a much higher hosting cost. I don't believe my 200-table DBs are actually causing any performance issues on this shared hosting server, at least not from the user application standpoint. There are never more than 10 of these tables joined in any single query, although I have seen some very slow queries generated by phpMyAdmin on these DBs.
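
    A minimal sketch of the consolidation being asked about, assuming each lookup table has columns named id and data (the table names and connection details are hypothetical placeholders):

        # Sketch: fold many 2-field lookup tables into one 3-field table.
        # Assumes each source table has columns (id, data); all names are
        # hypothetical placeholders.
        import mysql.connector

        LOOKUP_TABLES = ["colors", "sizes", "statuses"]

        conn = mysql.connector.connect(host="localhost", user="app",
                                       password="***", database="mydb")
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE IF NOT EXISTS lookup (
                id         INT NOT NULL,
                field_name VARCHAR(64) NOT NULL,
                data       VARCHAR(255),
                PRIMARY KEY (field_name, id)  -- keeps per-table id uniqueness
            )
        """)
        for t in LOOKUP_TABLES:
            # Tag each row with the table it came from.
            cur.execute("INSERT INTO lookup (id, field_name, data) "
                        "SELECT id, %s, data FROM " + t, (t,))
        conn.commit()
        conn.close()

    A query like SELECT data FROM colors WHERE id = 3 then becomes SELECT data FROM lookup WHERE field_name = 'colors' AND id = 3, which is where the PHP rework comes in.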

  • Reply to a Mailman-archived message

    - by Jasper
    (Not exactly sure if this is the right place for this question, but I trust you will migrate it if it isn't.) I was having a problem with gdb, and while the issue appears to be recurring, I found only one instance of someone recently experiencing the same problem. I found this other instance on a Mailman-archived mailing list. Then I tried some more things and finally solved the issue with gdb. So, now I want to report the solution I found back to the mailing list. However, this is really only of use if Mailman recognizes my mail as being in the same thread as the original problem, but I do not have that mail (just the online archived version of it), so I cannot reply to it. My question: how can I make sure Mailman considers my mail a reply to that thread? Is simply copying the subject enough?
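
    Threading in mail clients and archivers is generally driven by the In-Reply-To and References headers rather than the subject line, so a sketch of constructing such a reply by hand (addresses and the Message-ID are hypothetical; many Mailman archives expose the original Message-ID via a raw/"view source" link):

        # Sketch: reply into an existing thread by reusing the original
        # post's Message-ID. All addresses and IDs are hypothetical.
        import smtplib
        from email.message import EmailMessage

        orig_id = "<20130101120000.GA1234@example.org>"  # from the archive

        msg = EmailMessage()
        msg["From"] = "me@example.com"
        msg["To"] = "some-list@example.org"
        msg["Subject"] = "Re: the original subject line"
        msg["In-Reply-To"] = orig_id
        msg["References"] = orig_id
        msg.set_content("Reporting back: here is what fixed it for me...")

        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)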

  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that cannot tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures). In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period:

    - Reduce the size of each partition (by increasing the partition count)
    - Increase the number of JVMs across the cluster (increasing the total number of primary service threads)
    - Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed)
    - Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves)
    - Make sure that the backing map is as fast as possible (in most cases this means running on-heap)
    - Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded)

    As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).

  • How do I force Excel (and other Office products) to stop opening files in the same application?

    - by KronoS
    Whenever I "double click" on an Excel file and another Excel file is open, the newly opened file automatically opens in the same application window as the previously opened Excel file. This isn't limited to just Excel, as I've seen Word do this as well. This poses a problem when wanting to compare documents side by side. The current solution I have for this is to actually open another Excel or Word instance, and then open the file from within that application window itself. Is there a way to force Office to open a new instance of the application when double clicking on the file icons? I'm currently using Office 2007 and Windows XP but I've seen this on Office 2010 and Windows Vista and 7. I'm looking for an overall solution if possible.

  • Clone a Red Hat RAID as part of a disaster recovery plan

    - by Campo
    I am looking for recommendations on cloning a Red Hat mirrored RAID to a single hard drive located in the same machine. The idea is that if the server's hardware ever has an issue, we have a similar hardware machine ready to go: all we would have to do is pop in the cloned drive. If the server's RAID ever failed, we could just switch to the single drive to maintain uptime, and restore the original configuration on the spare server from a backup. This is a restaurant and they are open 7 days a week, but we do have time from 12:00 am to 9:00 am to perform the necessary steps for a clone, and we're talking about under 10 GB of information. There is a database on the server. I have looked into rsync and Clonezilla, but I am just not confident either is capable of completing the task I want. Looking for some suggestions, and possibly a step-by-step if you could be so kind.
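
    A sketch of the rsync variant being considered, assuming the spare drive is already partitioned, formatted, and mounted at /mnt/clone (all paths are hypothetical). Note that a bootable clone would additionally need a bootloader installed on the spare drive, and the database should be stopped or dumped first for a consistent copy:

        # Sketch: nightly rsync of the live filesystem onto the spare drive.
        # Assumes the spare is mounted at /mnt/clone (hypothetical path).
        import subprocess

        subprocess.run(
            ["rsync", "-aAXH", "--delete",
             "--exclude=/proc/*", "--exclude=/sys/*", "--exclude=/dev/*",
             "--exclude=/mnt/clone",
             "/", "/mnt/clone"],
            check=True,  # fail loudly if the copy does not complete
        )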

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I've been learning about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, an entire code repository, all as belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know that I could put all those into a single folder/directory, but what if they already exist in disparate directories, and they need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that that was supposed to be part of Vista/7, but that fell off the feature list too. Sure, any program can store a binary file and have any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?

  • Storing bundled AMIs at Amazon EC2

    - by Industrial
    Hi everybody, I am totally new to configuring servers and working with EC2, so please bear with me. I managed, after a lot of hair pulling, to get a server with Ubuntu up and running with memcached and some other goodies that would make a great package for me. I thought, however, that when storing it as an AMI with this tool I would be able to have memcached available the next time I launched an instance based upon that image. What can I do to make sure that my configuration is saved properly to an instance? Question number two: can I somehow make a command run automatically on server creation, like initiating memcached with "memcached -d -m 1700 -u root", or even a batch of them?
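
    One way to handle the second part is EC2 user data: a script passed at launch that Ubuntu images run on first boot via cloud-init. A sketch using today's boto3, where the AMI ID and key name are hypothetical placeholders:

        # Sketch: launch an instance from the bundled AMI with a user-data
        # script that starts memcached at boot. All IDs are hypothetical.
        import boto3

        user_data = "#!/bin/bash\nmemcached -d -m 1700 -u root\n"

        ec2 = boto3.client("ec2", region_name="us-east-1")
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
            InstanceType="m1.large",
            MinCount=1,
            MaxCount=1,
            KeyName="my-key",                  # hypothetical key pair
            UserData=user_data,
        )

    Anything baked into the image itself (installed packages, config files) is preserved by the bundling; user data covers the "run this on creation" part.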

  • Enable a program in Windows to run multiple times?

    - by user135490
    I've got this legacy software that only allows you to run one copy at a time; it detects that you have another session open and won't allow you to open a second instance. The problem is that this is a CPU-intensive program and it only uses a single core. Are there any hacks or tweaks so I can trick it into opening more than one instance? This would allow me to retire about 5 servers. I'm using Windows 2008 R2. I had to use CFF Explorer to enable the use of more than 2 GB of RAM, as the program crashes when it tries to use more than 2 GB.

  • Recommendation for configuring a multi-core guest OS

    - by reidLinden
    Hi there, I've just received an upgraded host machine, and am looking to push some of those advances to my workstation's guest OS(s). In particular, I used to have a single processor with 2 cores, so my guest OS only had 1 processor/1 core. Now I've got a single processor with 8 cores, so I'm curious about what would be recommended for my guest OS now? 1 processor/4 cores? 2 processors/2 cores? 4 processors/1 core? My instinct says to stick with the number of physical processors (or fewer), but is that based on reality? I spent a good while looking for an answer to this, but perhaps my google-karma isn't in my favor today. Suggestions?

  • Setting up a Google Analytics Campaign

    - by Ashfame
    I will be doing a bunch of things to give one of my projects (the main app) a big initial push, for which I will be building a few small Facebook apps to help promote the main app. Traffic from these apps needs to be tracked individually. My main app will be posting on users' walls when they need to be notified; traffic from these posts needs to be tracked. Traffic from emails sent by the main app needs to be tracked too, broken down by type of email. I need to track all of these (and possibly a couple more), but I need to be sure that I build my campaign URLs correctly, as I won't get another chance to fix it. Correct me where I am wrong.

    For emails:
    - Campaign Name: Launch
    - Campaign Medium: Email
    - Campaign Source: Type1 or Type2 (I can break it down for different types of email, right?)

    For apps:
    - Campaign Name: Launch
    - Campaign Medium: Apps
    - Campaign Source: App1 or App2 (I can break it down here for different apps, right?)

    What if I want to track two different links within a single email or a single app? Is there any way of tracking them individually too, but still keeping track of them as one, since tracking them as one makes more sense for me? Campaign Term and Campaign Content are irrelevant in my case, or can/should I use them for something? And I will also be tracking traffic of different apps. Should I do more? Let me know if my scenario wasn't clear enough and I need to explain more.
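
    For illustration, a sketch of building these campaign URLs with the standard GA utm_* query parameters; utm_content is the field that tells two links in the same email or app apart while they still share one source/medium/campaign (the URLs below are hypothetical):

        # Sketch: build campaign URLs with the standard GA utm_* parameters.
        # utm_content distinguishes links within one source; URLs are
        # hypothetical placeholders.
        from urllib.parse import urlencode

        def campaign_url(base, source, medium, campaign, content=None):
            params = {"utm_source": source,
                      "utm_medium": medium,
                      "utm_campaign": campaign}
            if content:
                params["utm_content"] = content
            return base + "?" + urlencode(params)

        # Two links in the same Type1 launch email, tracked as one source:
        print(campaign_url("http://example.com/", "type1", "email", "launch",
                           "header-link"))
        print(campaign_url("http://example.com/", "type1", "email", "launch",
                           "footer-link"))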

  • LDAP: encrypting an attribute that extends userPassword

    - by Foezjie
    In my current LDAP schema I have an objectClass (let's call it group) that has 2 attributes that extend userPassword, like this:

        attributeType ( groupAttributes:12 NAME 'groupPassword1'
            SUP userPassword SINGLE-VALUE )
        attributeType ( groupAttributes:13 NAME 'groupPassword2'
            SUP userPassword SINGLE-VALUE )

    group extends organization, so it already has a userPassword attribute. If I use that to enter a new group using phpLDAPadmin, it uses SSHA (by default) and encrypts/hashes the password I entered. But the passwords I entered for groupPassword1 and groupPassword2 don't get encrypted. Is there a way to make it so that those attributes are encrypted too?
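
    Hashing in the client/server stack is typically wired to userPassword itself (which matches the behavior described above), so one workaround is to hash the value on the client before writing it to the custom attribute. A sketch of producing an RFC 2307-style {SSHA} value; whether the consuming application then verifies correctly against these attributes is an assumption to test:

        # Sketch: client-side {SSHA} hashing, matching the salted-SHA1
        # scheme applied to userPassword by default.
        import base64
        import hashlib
        import os

        def ssha(password: str) -> str:
            salt = os.urandom(4)
            digest = hashlib.sha1(password.encode("utf-8") + salt).digest()
            # {SSHA} is base64(sha1(password + salt) + salt)
            return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")

        print(ssha("secret"))  # store this string in groupPassword1/2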

  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysis of a single user upload can take 3 seconds to conclude and uses 100% CPU. This makes our system capacity about 1/3 of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, connecting to a common DB. The problem is that the analysis outputs files on disk. A load balancer would increase capacity, but then files won't be available between servers, so subsequent client requests may fail. We are hosted on Rackspace; is there a way to configure some sort of "common" storage for all servers, without having to rewrite our file persistence code? Current code relies on simple fopens etc. What are our options?

  • Select Firefox search result

    - by Nicolas C.
    I work on a daily basis with a web application with very large menus. Since I also do lots of other Excel manipulations, copying and pasting, etc., I am quite fond of keyboard shortcuts, as they are much faster than using the mouse to point, double-click, and then go back to the keyboard. Hence, my question is quite simple: does anyone know of any shortcut under Firefox which would let me actually select (and not just highlight) the search result in my web page, so that I can do the following manipulation sequence?

    1. [Ctrl]+[F]
    2. Type the search string, for instance 'regional_unit'
    3. The missing shortcut: actually select in my page the string which is currently highlighted thanks to the search feature of FF
    4. [Space] or [Enter] to activate the web element, which in my case would systematically correspond to a link or button

    Maybe there is an addon replacing the default search feature, I don't know... I tried to look over the internet, but with the words I am using for this investigation I do not get relevant search results on Google :(. Thanks a lot

  • Multiple WAPs: Bandwidth, Frequency Considerations

    - by Pete Cresswell
    The router in my LAN closet does 2.4 and 5 GHz. In the kitchen I have a single-band 2.4 GHz WAP, and in the garden shed I have another single-band 2.4 GHz WAP. All are set to Bandwidth = 40 MHz, Wireless Network Mode = N-Only. The kitchen WAP and the LAN closet router both come up with multiple bars on my smartphone from almost anywhere in the house. The garden shed WAP will register one bar... but only sometimes. The questions: Are these things in danger of butting heads? Should I re-set them to Bandwidth = 20 MHz? Bandwidth = Auto? Are there any tools I could use on an Android smartphone, iPod, or WiFi-enabled laptop to make my own analysis?

  • Using a CDN for CMS software (multiple sites)

    - by SmokeyPHP
    I'm currently researching ideas for the media management side of a CMS I'm writing. I was looking at having images served from a CDN, which is fine for a single site, but I want all sites that run the CMS to make use of a CDN (which will most likely be a custom-developed one, rather than a third-party service like S3). My main question is: is a multi-site CDN a good idea? I can't think of a downside, but I have probably missed something. Obviously the sites won't share the same folder, as I envisage the requests to be css.cdnsite.com/example.com/style.css or something along those lines. Having multiple sites in the same place will obviously make it easier for us to manage, as well as being cheaper, but then I wonder if it'll be worth it...

    Long story short: how should the CMS handle user-uploaded media across separate installations?

    1. Just keep a local copy of all assets and serve them from the same site, like in days of yore?
    2. Keep a local copy, force each site to use www., and have CDN subdomains per site?
    3. Use a single separate CDN for all sites?

    Apologies for the length of this question; not sure if this should be multiple questions or not, as all parts are kind of related and could affect each other.

  • Running SQL 2008 on a VM

    - by chris.w.mclean
    We are pondering setting up a SQL 2008 instance inside a VM for a production environment. All our SQL instances use iSCSI over gigabit Ethernet to talk to a NAS, as would this new instance. Any reason this is a bad idea, or any considerations to make this work well? The VM would be running in Xen 5.5, or we could set it up in Hyper-V if there's a compelling case for that. And the VM's VHD would be stored on a different NAS than the SQL storage is on.

  • My Windows keyboard is being "clever" with the quote keys - how can I stop it?

    - by Marcin
    I'm using Windows 7 on a laptop. On the laptop keyboard, for some reason, the quote key (which has both double and single quotes on it) is doing some "clever" annoying things:

    - When I press single-quote (or double-quote), Windows doesn't send any characters until I press it twice (resulting in '' or "")
    - When I press it before a vowel, I get some kind of accented character. As I usually only write English, this is annoying.

    The backtick/tilde key is subject to similar behaviour. I have not attempted to set up my computer to process anything other than English. My keyboard appears to be (in so far as these things are standard on laptops) a standard US QWERTY keyboard. How can I stop this happening?

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server that all the websites connect to, as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on, from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter.

    CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs. multiple databases. That question has been answered numerous times. The question is about the pros and cons, for a deployment like this, of managing all the websites centrally (one server) vs. trying to keep them all in sync if they each have their own DB (multiple servers).

    REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?
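
    For illustration, a minimal sketch of what the "keep them in sync" option implies for the t-shirt example: every per-site database must be touched for one image change. The stack described is ColdFusion + MySQL, so this Python sketch only illustrates the data flow; hosts, credentials, and schema are hypothetical placeholders. With the single central database, the whole loop collapses to one UPDATE:

        # Sketch: propagate one product change to each per-site database.
        # Hosts, credentials, and schema are hypothetical placeholders.
        import mysql.connector

        SITE_DBS = ["site1.example.com", "site2.example.com"]

        def propagate_image(product_id, image_path):
            for host in SITE_DBS:
                conn = mysql.connector.connect(host=host, user="sync",
                                               password="***",
                                               database="shop")
                cur = conn.cursor()
                cur.execute("UPDATE products SET image = %s WHERE id = %s",
                            (image_path, product_id))
                conn.commit()
                conn.close()

        propagate_image(42, "/img/tshirt-42-v2.png")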

  • Multi Threading - How to split the tasks

    - by Motig
    If I have a game engine with the basic 'game engine' components, what is the best way to 'split' the tasks with a multi-threaded approach? Assuming I have the standard components of:

    - Rendering
    - Physics
    - Scripts
    - Networking

    and a quad-core, I see two ways of multi-threading:

    Option A ('Vertical'): Using this approach I can allow one core for each component of the engine; e.g. one core for the Rendering task, one for the Physics, etc. Advantages:

    - I do not need to worry about thread-safety within each component
    - I can take advantage of special optimizations provided for single-threaded access (e.g. DirectX offers a flag that can be set to tell it that you will only use single-threading)

    Option B ('Horizontal'): Using this approach, each task may be split up into 1 <= n <= numCores threads and executed simultaneously, one after the other. Advantages:

    - Allows for work-sharing, i.e. each thread can take over work still remaining as the others are still processing
    - I can take advantage of libraries that are designed for multi-threading (i.e. ... DirectX)

    I think, in retrospect, I would pick Option B, but I wanted to hear you guys' thoughts on the matter.
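
    A minimal sketch of Option B's shared worker pool, in Python purely for illustration (the task functions are hypothetical stand-ins, and in CPython the GIL means this shows the structure rather than a real CPU-bound speedup; a native engine's job system would behave similarly in shape):

        # Sketch: "horizontal" work-sharing -- one pool that all subsystems
        # submit tasks to, so idle workers pick up whatever work remains.
        from concurrent.futures import ThreadPoolExecutor

        NUM_CORES = 4

        def update_physics(chunk_id):
            pass  # hypothetical per-chunk physics step

        def run_script(entity_id):
            pass  # hypothetical per-entity script update

        with ThreadPoolExecutor(max_workers=NUM_CORES) as pool:
            # One frame: physics in 8 chunks, scripts for 100 entities.
            futures = [pool.submit(update_physics, c) for c in range(8)]
            futures += [pool.submit(run_script, e) for e in range(100)]
            for f in futures:
                f.result()  # wait for the frame's tasks; re-raise errors
            # ...then render on the main thread with all results in place.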

  • Complex string matching with fuzzywuzzy

    - by That1Guy
    I'm attempting to write a process that matches obscure strings to a single 'master string' for further processing. I have a lot of data that looks something like this:

        Basketball
        Basket Ball
        Football
        BasketBallR
        BBall
        BBall - r
        FootB

    ...and so on. These need to be mapped to a master record like so:

        Basketball = Basket Ball, BBall
        Basketball - R = BasketBallR, BBall - r

    I also have instances of data resembling this format:

        Football -r
        FootBall - r-g/H,Q,HH

    These situations need to be separated into different categories before being mapped. For example, FootBall - r-g/H,Q,HH should be:

        Football - r
        Football - g
        Football - H
        Football - Q
        Football - HH

    At this point, it still needs to be mapped to a master record... I've tried several different combinations of fuzzywuzzy matching methods, Levenshtein distance measurements, regex, etc. and can't seem to find a reliable method to logically associate different naming styles of a single item with a master name. I'm throwing my hands up in desperation. Are there any existing Python resources that can help sort out my problem? Are there other options? Can anybody point out an obvious option that I might have overlooked? Basically, any suggestion, solution, resource or alternative method is greatly appreciated.
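
    One hedged sketch of the two-stage approach implied above: first expand the compound suffix form with a regex, then fuzzy-match each variant against the master list (the master names here are hypothetical):

        # Sketch: expand "Base - a-b/c" into per-suffix variants, then match
        # each against the master list with fuzzywuzzy. Masters hypothetical.
        import re
        from fuzzywuzzy import fuzz, process

        MASTERS = ["Basketball", "Basketball - R", "Football - r",
                   "Football - g", "Football - H"]

        def expand(raw):
            # "FootBall - r-g/H,Q,HH" -> ["FootBall - r", "FootBall - g", ...]
            m = re.match(r"^(.*?)\s*-\s*(.+)$", raw)
            if not m:
                return [raw]
            base, suffixes = m.groups()
            return ["%s - %s" % (base, s)
                    for s in re.split(r"[-/,]", suffixes) if s]

        for variant in expand("FootBall - r-g/H,Q,HH"):
            best, score = process.extractOne(variant, MASTERS,
                                             scorer=fuzz.token_sort_ratio)
            print(variant, "->", best, score)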

  • Custom Transport Agent: How do I collect NDRs and all other undeliverables in Exchange 2010 from the Postmaster?

    - by makerofthings7
    I'm trying to collect all NDRs in a single mailbox for all invalid recipients, and anything that fails for any reason. I have a custom transport agent that I've written myself, which appears here:

        [PS] C:\Windows\system32>Get-TransportAgent

        Identity                        Enabled   Priority
        --------                        -------   --------
        Transport Rule Agent            True      1
        Text Messaging Routing Agent    True      2
        Text Messaging Delivery Agent   True      3
        Routing Rule Agent              True      4

    Sometimes when I run Get-MessageTrackingLog I get failures like the one below:

        RunspaceId              : 4ecc61fb-13b9-4506-b680-577222c9bf21
        Timestamp               : 10/14/2013 12:42:42 PM
        ClientIp                :
        ClientHostname          : Exchange1
        ServerIp                :
        ServerHostname          :
        SourceContext           : Routing Rule Agent
        ConnectorId             :
        Source                  : AGENT
        EventId                 : FAIL
        InternalMessageId       : 4416
        MessageId               : <[email protected]>
        Recipients              : {[email protected]}
        RecipientStatus         : {}
        TotalBytes              : 4542
        RecipientCount          : 1
        RelatedRecipientAddress :
        Reference               :
        MessageSubject          : review CGRC due diligence.
        Sender                  : [email protected]
        ReturnPath              : [email protected]
        MessageInfo             :
        MessageLatency          :
        MessageLatencyType      : None
        EventData               :

    How can I collect the NDRs in a single mailbox for review? I have already run the following command, but it is of no effect:

        [PS] C:\>Set-TransportConfig -JournalingReportNdrTo [email protected] -ExternalPostmasterAddress [email protected]
