Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • Best practice for ONLY allowing MySQL access to a server?

    - by Calvin Froedge
    Here's the use case: I have a SaaS system that was built (dev environment) on a single box. I've moved everything to a cloud environment running Ubuntu 10.10. One server runs the application, the other runs the database. The basic idea is that the server that runs the database should only be accessible by the application and the administrator's machine, both of which have the correct RSA keys. My question: Would it be better practice to use a firewall to block access to ALL ports except MySQL, or skip firewall / iptables and just disable all other services / ports completely? Furthermore, should I run MySQL on a non-standard port? This database will hold quite sensitive information and I want to make sure I'm doing everything possible to properly safeguard it. Thanks in advance. I've been reading here for a while but this is the first question that I've asked. I'll try to answer some as well = )
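
    A minimal sketch of the firewall approach described above, assuming iptables on the database box; the application-server and admin IP addresses are placeholders, not values from the question:
        # keep loopback and established sessions working before locking the default policy
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # MySQL only from the application server, SSH only from the admin machine
        iptables -A INPUT -p tcp -s 10.0.0.10 --dport 3306 -j ACCEPT
        iptables -A INPUT -p tcp -s 203.0.113.5 --dport 22 -j ACCEPT
        # everything else inbound is dropped
        iptables -P INPUT DROP
    Binding MySQL to the private interface (bind-address in my.cnf) and disabling unused services are often used alongside a firewall rather than instead of one.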

  • Why do we still have to use drive letters to identify file systems?

    - by Charles E. Grant
    A friend has run into a problem where they installed Windows 7 from an external drive, and the internal boot drive is now assigned to H:. Theoretically this shouldn't cause problems because there are programming interfaces for getting the drive letter for the system drive. In practice though, there are quite a few programs that assume that C: is the only possible location for the system directories, and they refuse to run with the system directories on H:. That's not Microsoft's fault, but it's a pain nonetheless. The general consensus seems to be that a re-install, setting the internal boot drive to C:, is the only way to fix these problems. UNIX-like systems display all file systems in a single unified directory tree and mostly seem to avoid problems like this. Is it possible to configure a Windows system without reference to drive letters, or does the importance of backwards compatibility mean that Windows will be working with drive letters from now until doomsday?

  • One vs. many domain user accounts in a server farm

    - by mjustin
    We are in the process of migrating a group of related computers (intranet servers, SQL, application servers of one application) to a new domain. In the past we used one domain user account for every computer (web1, web2, appserver1, appserver2, sql1, sqlbackup ...) to access central Windows resources like network shares. Every computer also has a local user account with the same name. I am not sure if this is necessary, or if it would be easier to configure and maintain to use one domain user account. Are there key advantages / disadvantages of having one single user account vs. dedicated accounts per computer for this group of background servers? If I am not wrong, one advantage besides easier administration of the user account could be that moving installed applications and services around between the computers does not require a check of the access rights anymore. (Except where IP addresses or ports are used.)

  • Full Access user removed from NTFS Share

    - by TJ
    I don't know how it happened, but for some reason one of the sub folders in the network shares (call the share Market and the sub folder Support) no longer has any groups or users with full permissions on it. The Market top level has users and groups with these permissions and everything is set up for folder inheritance, but Support is not inheriting permissions from the top level and only has modify permissions for the single group in its access list. I can see items in the sub folder but I cannot add, edit, or delete permissions on the Support folder. What are my options so I can once again manage permissions?
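
    If the immediate goal is just to get Support inheriting from Market again, a sketch using the built-in icacls tool from an elevated prompt (the path is a placeholder; takeown is only needed if no account you control still has rights on the folder):
        takeown /F "D:\Market\Support" /R /D Y
        icacls "D:\Market\Support" /reset /T /C
    icacls /reset replaces the folder's explicit ACL with the ACL inherited from its parent, and /T applies that down the tree.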

  • How to plan/manage multi-platform (mobile) products?

    - by PhD
    Say I have to develop an app that runs on iOS, Android and Windows 8 Mobile. All three platforms technically use different programming languages. The only 'reuse' that I can see is that of the boxes-and-lines drawings (UML :) charts and nothing else. So how do companies/programmers manage the variation of the same product across different platforms, especially since the implementation languages differ? It's 'easier' in the desktop world IMO, given the plethora of languages and cross-platform libraries to make your life easier. Not so in the mobile world. More so, product line management principles don't seem to be all that applicable - what is same and what is variant doesn't really matter - the application is the same (conceptually) and the implementation is variant. Some difficulties that come to mind: Bug fixing: applications may be designed in a similar manner, but bug identification and fixing would be radically different. A bug on iOS may or may not exist on Android. Or a bug fix approach on one platform may not be the same on another (unless it's a semantic bug like a!=b instead of a==b, which would require the same 'approach' to fixing in essence). Enhancements: making a change on one platform would be radically different than on another. Code-design divergence: the way the code is written/organized, the class structures etc. could be very different given the different implementation environments - limiting further reuse of the (above) UML models. There are of course many others - just keeping the development in sync and making sure all applications are up to the same version with the same set of features, etc. The effort seems to be 3x that of a single application. So how exactly does one manage this nightmarish situation? Some thoughts: split the application into client/server to minimize the effect to the client side only (not always doable); use frameworks like Unity-3D that could take care of the cross-platform problem (mostly applicable to games and probably not to other applications). Any other ways of managing a platform line? What are some proven approaches to managing/taming the effects?

  • Why are my downloads up to ~1500KByte/sec only, when the ADSL connection locks at 13611Kbit/s?

    - by leladax
    No uploading is going on other than the overhead of downloading, which appears to be well within the abilities of the connection: only about 30-40KByte/s when the router locks at 1012Kb/s upstream, and other direct uploads or uploading overheads can reach more than 100KByte/sec, so I don't think upload congestion is doing it. Is there something I'm missing? I assume 13611Kbit/s should be ~1701KByte/sec. Is it an overhead at the ADSL level I don't understand? Could it be the ISP doing it? If it's active throttling it can't be per-connection, since 2 high-speed connections still top out at ~1500KByte/sec. This isn't a case of torrents or other complex situations. The tests were on Ethernet, but I doubt the results would be different on wireless. I wonder if the settings of those connections at my end could be doing it, e.g. MTU settings, though I haven't touched the defaults of a common Realtek NIC.
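
    For reference, a rough version of the arithmetic involved; the overhead percentage below is a typical figure for ADSL lines, not something measured on this particular connection:
        13611 kbit/s / 8 = ~1701 KByte/s raw sync rate
        ATM cell framing (48 payload bytes per 53-byte cell) plus AAL5/PPPoE and TCP/IP
        headers commonly cost on the order of 10-15%, i.e. roughly 1450-1530 KByte/s of
        usable download throughput - in the same range as the ~1500 KByte/s observed.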

  • AdSense (reports) and custom channels

    - by RobbertT
    Please help me to further understand custom channels. Google says it is a way to map your ads, but I still have a few questions: Is it correct that a single custom channel per ad is not very useful, since you can already specify ad blocks in the AdSense reports? I have multiple ads in multiple custom channels. After this I created 1 custom channel and added all the ads to it. I made this channel targetable, so people can target all the ads at once through this channel. Is this a good way to do it? In other words, is it possible to have ads in multiple custom channels (without targeting, just for analyzing) and then create 1 custom channel with targeting that embraces all the (desired) ads? Why is it not possible for me to analyze custom channels (or ad blocks & formats) per site in the AdSense reports? Or am I doing something wrong? If not, do I have to create different custom channels per site to see how certain ads are doing at the site level?

  • PASS: Total Registrations

    - by Bill Graziano
    At the Summit you’ll see PASS announce the total attendance and the “total registrations”.  The total registrations is the sum of the conference attendees and the pre-conference registrations.  A single person can be counted three times (conference plus two pre-cons) in the total registration count. When I was doing marketing for the Summit this drove me nuts.  I couldn’t figure out why anyone would use total registrations.  However, when I tried to stop reporting this number I got lots of pushback.  Apparently this is how conferences compare themselves to each other.  Vendors, sponsors and Microsoft all wanted to know our total registration number.  I was even asked why we weren’t doing more “things” that people could register for so that our number would be even larger.  This drove me nuts. I understand that many of you are very detail oriented.  I just want to make sure you understand what numbers you’re seeing when we include them in the keynote at the Summit.

  • Cascading KVM switches

    - by einpoklum
    I have a not-so-small number of computers (say, 5) which I want to access with a single keyboard, mouse and monitor. I can get an 8-port KVM switch, which is a pretty expensive piece of hardware; however, in theory, I should be able to cascade KVM switches: have one 4-port KVM switch between 2 other KVMs (a 2-port and a 4-port). Is this doable (with typical off-the-shelf switches and cables)? Has anyone had experience doing this? Note: I'm interested in USB-only for the keyboard and mouse, and either VGA or DVI for the display. Audio and PS/2 connections are irrelevant for me.

  • Limited access to Amazon S3 buckets

    - by Tomas Markauskas
    Is it possible to somehow limit access to an Amazon S3 account? I don't really like the idea of distributing my secret access key to all of my applications that want to access just a single bucket on my account. If someone gains access to one of the applications, I could lose all my data stored on S3. One way I was thinking to do it would be creating a second S3 account and giving it access to just one bucket of the main account, but it's not really a great solution. Another nice thing for me would be to give the secondary account only write (but not modify/delete) and read access. That way I could upload backups or other files and be sure that they won't get lost.
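
    Assuming the account has access to AWS IAM, a sketch of a policy for a secondary user that can only read and write one bucket (the bucket name is a placeholder); note that s3:PutObject still allows overwriting an existing key, so enabling bucket versioning is the usual guard against accidental loss:
        {
          "Version": "2012-10-17",
          "Statement": [
            { "Effect": "Allow",
              "Action": ["s3:GetObject", "s3:PutObject"],
              "Resource": "arn:aws:s3:::my-backup-bucket/*" },
            { "Effect": "Allow",
              "Action": "s3:ListBucket",
              "Resource": "arn:aws:s3:::my-backup-bucket" }
          ]
        }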

  • Windows 7: enabling navigation of subfolders in pinned Start Menu folders

    - by AspNyc
    I'm just about to move from Windows XP to Windows 7, and I'm struggling with some of the interface changes. In XP, I was able to throw a folder into C:\Documents and Settings\username\Start Menu and have it appear on the Start Menu, complete with the ability to navigate through subfolders. I've figured out how to pin a folder onto the Start Menu in Windows 7, which required a registry hack. However, I am unable to view the subfolders of the pinned folder without opening a new Windows Explorer window. Is there any way to replicate the old XP behavior I'm used to? I'd like to be only a single click away from these handful of application links and folders, since I use them all the time throughout the day.

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things: very large installation size (~4 GB); composed of a core, and several plugins (toolboxes). Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The debian/rules file is just the standard all: dh $@ There is no Makefile; only copying files. I use a number of debian/*.install files to specify files to copy from a copy of an installation to /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it recopies every file listed in the *.install file to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and is doing a lot of extra work. So my questions are: Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.) Maybe I should just get it to do this in the Makefile? (That's gonna be a big Makefile.) Can you make debuild only rebuild .debs that need rebuilding? Or specify which .debs to rebuild? Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there's hundreds of them. :/)

  • Using a random string to authenticate HMAC?

    - by mrwooster
    I am designing a simple webservice and want to use HMAC for authentication to the service. For the purpose of this question we have: a web service at example.com; a secret key shared between a user and the server [K]; a consumer ID which is known to the user and the server (but is not necessarily secret) [D]; a message which we wish to send to the server [M]. The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it's very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters which make it into various file formats. This is of course bad implementation on the client side (100%), but I would like this webservice to be easily accessible and not have trouble with different file formats. I was thinking of an alternative which would allow the use of a short (5-10 char) random string [R] rather than the message for authentication, e.g. H = HMAC(K,R). The user then passes the random string to the server and the server checks the HMAC server side (using random string + shared secret). As far as I can see, this produces the following issues: There is no message integrity - this is OK, message integrity is not important for this service. A user could re-use the hash with a different message - I can see 2 ways around this: combine the random string with a timestamp so the hash is only valid for a set period of time, or only allow each random string to be used once. Since the client is in control of the random string, it is easier to look for collisions. I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the correct answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable 2nd best?
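
    A minimal sketch of the proposed scheme using the openssl command line, with K and D as defined above; the field separator and the exact request layout are made up for illustration:
        R=$(openssl rand -hex 5)       # short client-chosen random string
        T=$(date +%s)                  # timestamp to bound the replay window
        SIG=$(printf '%s' "$D:$R:$T" | openssl dgst -sha256 -hmac "$K" | awk '{print $2}')
        # the request carries D, R, T and SIG; the server recomputes the HMAC from its
        # copy of K, rejects stale timestamps, and remembers recently seen (R, T) pairs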

  • Solaris + EMC + PowerPath

    - by yael
    Please advise - when I run the powercf command on my Solaris machine, what changes does this command make on the EMC storage, or on the Solaris file system? From the manual page: DESCRIPTION During system boot on Solaris hosts, the powercf utility configures PowerPath devices by scanning the HBAs for both single-ported and multiported storage system logical devices. (A multiported logical device shows up on two or more HBAs with the same storage system subsystem/device identity. The identity comes from the serial number for the logical device.) For each storage system logical device found in the scan of the HBAs, powercf creates a corresponding emcpower device entry in the emcp.conf file, and it saves a primary path and an alternate primary path to that device.

  • Recursively apply ACL permissions on Mac OS X (Server)?

    - by mralexgray
    For years I've used the strong-armed-duo of these two suckers... sudo chmod +a "localadmin allow read,write,append,execute,\ delete,readattr,writeattr,readextattr,writeextattr,\ readsecurity,writesecurity,chown" sudo chmod +a "localadmin allow list,search,add_file,add_subdirectory,\ delete_child,readattr,writeattr,readextattr,\ writeextattr,readsecurity,writesecurity,chown" to, for what I figured was a recursive, and all-encompassing, whole-volume-go-ahead for each and every privilege available (for a user, localadmin). Nice when I, localadmin, want to "do something" without a lot of whining about permissions, etc. The beauty is, this method obviates the necessity to change ownership / group membership, or executable bit on anything. But is it recursive? I am beginning to think, it's not. If so, how do I do THAT? And how can one check something like this? Adding this single-user to the ACL doesn't show up in the Finder, so… Alright, cheers.
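
    On the recursion question: OS X's chmod accepts -R together with +a, so each of the two ACL strings above can be applied in a single recursive pass; the path below is a placeholder. ls -le on a file, or ls -led on a directory, shows whether the entries actually landed.
        sudo chmod -R +a "localadmin allow list,search,add_file,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown" /Volumes/Data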

  • Algorithms for Data Redundancy and Failover for distributed storage system?

    - by kennetham
    I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50GB, 70GB, 150GB, 250GB and 1000GB - 5 storage devices in one system. My application will store any files to the storage system. Question: How can I build a distributed storage with the idea of data redundancy and fail-over to store documents, videos, any type of files, while ensuring that, should any one of the storage devices fail, there is still another copy of these files on another storage device? However, the concern is that the 50GB device can only hold a much smaller number of files than the 70GB, 150GB, etc. devices. With the 5 storage devices presented as one cloud-like storage pool, is there any logical way to distribute or store the files through my application? How do I ensure data redundancy across different storage sizes? Is there any algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage with multiple different storage sizes? I open this topic with the objective of discussing the best way to implement this idea - assuming simplicity, what the issues of this implementation are, performance measurements, and a discussion of the limitations.

  • How to disable password authentication for specific users in SSHD

    - by Nick
    I have read several posts regarding restricting ALL users to key authentication ONLY; however, I want to force only a single user (svn) onto key auth only, while the rest can use key or password. I read How to disable password authentication for every users except several, however it seems the "Match User" part of sshd_config is part of openssh-5.1. I am running CentOS 5.6 and only have OpenSSH 4.3. I have the following repos available at the moment. $ yum repolist Loaded plugins: fastestmirror repo id repo name status base CentOS-5 - Base enabled: 3,535 epel Extra Packages for Enterprise Linux 5 - x86_64 enabled: 6,510 extras CentOS-5 - Extras enabled: 299 ius IUS Community Packages for Enterprise Linux 5 - x86_64 enabled: 218 rpmforge RHEL 5 - RPMforge.net - dag enabled: 10,636 updates CentOS-5 - Updates enabled: 720 repolist: 21,918 I mainly use epel; rpmforge is used to get the latest version (1.6) of subversion. Is there any way to achieve this with my current setup? I don't want to restrict the server to keys only because if I lose my key I lose my server ;-)
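
    For reference, the Match-based configuration this is aiming for, which needs OpenSSH 4.4 or newer (so it only becomes an option if a newer openssh package is pulled in from one of the repos above, or the distro is upgraded):
        # /etc/ssh/sshd_config - global default stays permissive
        PasswordAuthentication yes
        # key-only for the svn account
        Match User svn
            PasswordAuthentication no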

  • How to correctly handle redirect after site facelift

    - by Stefan
    I recently updated our site, taking it from a multi-page site to a single-page site. The problem now is that when the site is searched in, say, Google, it displays the site as well as the previously indexed pages. So if a user clicks, say, our "About" page, it takes them to our now outdated material. I am hoping to get some guidance on how to properly handle this. I figure the first step is to now set up a robots.txt to tell the engines not to crawl beyond index.php. But in the meantime, how do I handle the fact that when searching our site on Google we may still have users who try to click on sub-page links? Should I simply set up redirects while waiting for the engines to update? And if so, do I need to set up redirects on each page using PHP, or is this something I would take care of in our site's control panel? I am not very familiar with redirects... Any help is appreciated!
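
    Assuming the site runs on Apache with .htaccess enabled, a sketch of per-page 301 redirects (the old page names and the domain are placeholders); a 301 also tells search engines to drop the old URLs, which robots.txt on its own does not do:
        Redirect 301 /about.php http://www.example.com/
        Redirect 301 /contact.php http://www.example.com/
        Redirect 301 /services.php http://www.example.com/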

  • Extract Certs from Apache

    - by user271619
    Recently I've had to uninstall a single Self-Signed SSL Certificate from one of my Apache boxes, specifically for an outside party. That's not really a problem for me, since it was easy. What confuses me is how they knew I had a self-signed certificate. The domain I provided them was not related to the domain with the self-signed certificate. Does this mean Apache publicizes the Virtual hosts in the httpd.conf file? I asked the outside party what software they used to extract information from my server, and they provided this GitHub link: https://gist.github.com/4ndrej/4547029 I figured I'd ask the community first, before I attempt installing the Java program.
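
    Most likely no httpd.conf reading is involved: anyone can connect to port 443 and read whatever certificate the matching (or default) virtual host presents. A quick way to see what an outside party sees, with the hostname as a placeholder:
        openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
    Dropping the -servername option shows the certificate served when no SNI name is sent, which is often the self-signed one from the default virtual host.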

  • Is it effective to install a firewall on the same machine that offers the service?

    - by Eonil
    I'm starting a small service, practically speaking, and I currently have a single server. There is no money to purchase separate/dedicated firewall equipment right now. Is it effective to install firewall software on the same machine that offers the internet services? My server will offer HTTP, NFS, and SSH, plus custom-made server software on several ports. (edit) All services (except NFS) should be open to the internet, not internal services. I guess my machine (virtualized within Xen) is connected to the internet directly, because I can connect to it over SSH with only an IP address. (edit) NFS is not open to the internet. Sorry for my mistake. NFS will be served via SSH only.

  • Auto login CISCO VPN client on linux [closed]

    - by user70704
    Hi, I have installed the Cisco VPN client on my Linux system (Fedora Core 8). After every login, I need to run the vpnc command to connect to the VPN server. The vpnc command prompts the user for the following input: IPSec gateway: IPSec ID: IPSec secret: Username: Password: So, my requirement is: can I connect to the VPN server with a single command? It is tedious to enter the above details every time. I want to connect to the VPN server at boot/startup. I tried using an expect script, but couldn't get it to work. Thanks in advance.
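
    vpnc can read those answers from a configuration file instead of prompting, so a single command (or an rc.local entry) can bring the tunnel up; the values below are placeholders, and the file should be root-readable only (chmod 600) since it holds the password:
        # /etc/vpnc/work.conf
        IPSec gateway vpn.example.com
        IPSec ID mygroup
        IPSec secret mygroupsecret
        Xauth username myuser
        Xauth password mypassword
        # connect with:  vpnc work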

  • excel - merge cells including a zip code

    - by evanmcd
    Hi all, I need to merge a bunch of cells that comprise an address (street, city, state, zip) into a single cell. No problem, except with the zip code. The zip cell shows only 4 digits for any zip that starts with 0, so I changed its format to Special - Zip Code. That makes the cell itself show the leading 0, but the merged cell still does not show it. Does anyone know how to get the leading 0 into the merged column? Thanks Evan
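
    A sketch of the usual workaround: apply the zip formatting inside the concatenation with TEXT(), since the merged result sees the underlying number rather than the cell's display format (the cell references assume a street, city, state, zip layout):
        =A2 & ", " & B2 & ", " & C2 & " " & TEXT(D2, "00000")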

  • Moviebarcodes Showcases Entire Movies as Frame-based Barcodes

    - by Jason Fitzpatrick
    If you’ve ever wanted a chance to look at at an entire movie in a single glance, here’s your chance. Moviebarcodes shares mock-barcodes generated by turning each frame of a movie into a thin stripe, offering a glimpse into the color choices and shot lengths in popular movies. The barcode seen above was generated from The Matrix; you can see where the green indicates scenes that were shot inside the matrix and thus given a subtle green tint. In the barcode below, generated from the movie Pleasantville you can see the transition in the movie between the color and black and white scenes. In the case of Pleasantville, elements of the black and white world turning to color represent pivotal moments in the plot development which are now neatly mapped out below: Check out the hundreds of barcodes at the link below; you can even order prints of your favorite movies. Find a great rendering in the mix? Share a link in the comments below. Moviebarcodes [via Cool Inforgraphics] How to Create an Easy Pixel Art Avatar in Photoshop or GIMPInternet Explorer 9 Released: Here’s What You Need To KnowHTG Explains: How Does Email Work?

  • windows - batch moving files to another folder/directory

    - by jdamae
    I am getting an error message to the effect of being unable to move files to a single file. That is not what I am trying to do. What I am trying to do is move files from one folder to another folder (staging) and then delete the original folder. Please show me a better way to do this if I am not doing it correctly. Thank you. Here is my .cmd file: Y: move "Y:\ABC_files\*.js" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\" move "Y:\ABC_files\*.css" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\" move "Y:\ABC_files\*.png" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\" move "Y:\ABC_files\*.htm" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\" move "Y:\ABC_files\*.gif" "C:\Documents and Settings\user\Desktop\ABC_Stage\ABC_files\" move "Y:\ABC.htm" "C:\Documents and Settings\user\Desktop\ABC_Stage\" rmdir "Y:\ABC_files" C:\"Program Files"\"App X"\App-IDE.exe -r ABC4.run

  • Series On Embedded Development (Part 3) - Runtime Optionality

    - by Darryl Mocek
    What is runtime optionality? Runtime optionality means writing and packaging your code in such a way that all of the features are available at runtime, but aren't loaded and used if the feature isn't used. The code is separate, and you can even remove the code to save persistent storage if you know the feature will not be used. In native programming terms, it's splitting your application into separate shared libraries so you only have to load what you're using, which means it only impacts volatile memory when enabled at runtime. All the functionality is there, but if it's not used at runtime, it's not loaded. A good example of this in Java is JVMTI, Java's Virtual Machine Tool Interface. On smaller, embedded platforms, these libraries may not be there. If the libraries are not there, there's no effect on the runtime as long as you don't try to use the JVMTI features. There is a trade-off between size/performance and flexibility here. Putting code in separate libraries means loading that code will take longer and it will typically take up more persistent space. However, if the code is rarely used, you can save volatile memory by including it in a separate library. You can also use this method in Java by putting rarely-used code into one or more separate JARs. Loading a JAR and parsing it takes CPU cycles and volatile memory. Putting all of your application's code into a single JAR means more processing for that JAR. Consider putting rarely-used code in a separate library/JAR.
