Search Results

Search found 3168 results on 127 pages for 'grand central dispatch'.

Page 83/127 | < Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >

  • RODC password replication and A/D sites and subnets

    - by Gregory Thomson
    I work at a school district with about 30 school sites. It's a Windows 2008 A/D setup, all centralized at the district office. In A/D everything is under one site, with no subnets defined - one A/D forest and only one domain under it. We're now looking to start putting RODCs at the schools to move authentication and DNS closer to them. I haven't worked with A/D sites and subnets, and only a little with RODC password replication, but I just got an invite to a meeting tomorrow to talk about this. If we start breaking the A/D pieces down into sites/subnets, can we also use that to apply an RODC password replication policy that matches, so that only each school site's users' passwords are replicated/cached on their RODC?

    Read the article

  • Cheapest Embedded System with Wireless Connectivity?

    - by geeko
    Problem: I'm trying to capture information coming from a keyboard, mouse and barcode reader connected to a PC via PS/2, USB and/or RS-232, before the information reaches the PC, and send it over the Internet to a central server. I'm thinking of doing so with some kind of hardware interface (middleware, if you like) between the PC and the input devices. I thought this interface could be an embedded PC, a PDA or simply a mobile phone with wireless connectivity. PS/2 and RS-232 could be converted to USB using a USB converter/hub that connects to one of these interface systems. Then some API programming would take place to communicate between the PC, the input devices and the wireless server, in the form of an application running on the interface system. What's the cheapest solution that can achieve this? Or possibly any other solution?

    Read the article

  • Web-based (intranet / non-hosted) timesheet / project tracking tools

    - by warren
    I realize some similar questions have been asked along these lines before, but from reading through them today, it appears they don't match my use case. I am looking for a web-based, non-hosted time and project tracking tool. I've downloaded Collabtive and Achievo so far, but am looking for other suggestions too. My list of requirements: runs on a standard LAMP stack; non-hosted (i.e., there is an option to download and run it on a local server); not a desktop/single-user application; easy to use - my audience is a mix of technical and non-technical folks; easy to maintain - when time for upgrading comes, I'd really like not to have to rebuild the app (a la ./configure ; make ; make install); needs to support multiple users; free-form project additions - we don't have a central project management authority (users should be able to add whatever they're working on, not merely pick from a drop-down). Does anyone here have experience with such tools? It doesn't have to be free... but free is always nice :)

    Read the article

  • Rewriting to address on postfix local aliases

    - by Wade Williams
    I was running into the common problem that mail for "root" on my system was having $mydomain appended, and because $mydomain is not in $mydestination, the mail was being sent to our central mail server as "root@domain." I cannot add $mydomain to $mydestination, because if I understand it correctly, that would mean all mail addressed to $mydomain would be looked up locally, and if an alias does not exist, delivery would fail. So I followed these instructions: Delivering some, but not all accounts locally, which seems to have resolved the problem. Mail for "root" is now expanded according to /etc/aliases and delivered to the non-local address I want. The one oddity, however, is that the "To:" address still reads "root@domain." Is there any way to get the "To:" address to be the one to which the alias directed delivery? For example, if my alias says that mail for "root" should go to "hostname-admin@domain", can the "To:" address be rewritten as "hostname-admin@domain"? Currently it still shows as "root@domain."
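
    For what it's worth, /etc/aliases expansion never touches message headers; the Postfix mechanism that rewrites both the envelope recipient and the To: header is canonical mapping. A hedged sketch follows (the domain and target address are placeholders, and note that canonical mapping runs in cleanup, before alias expansion, so this entry replaces the aliases lookup rather than working alongside it):

        # tell Postfix to consult a recipient canonical map
        postconf -e 'recipient_canonical_maps = hash:/etc/postfix/recipient_canonical'
        # map the locally-expanded name to the address that should appear in To:
        echo 'root@example.com    hostname-admin@example.com' >> /etc/postfix/recipient_canonical
        postmap /etc/postfix/recipient_canonical
        postfix reload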

    Read the article

  • MySQL on a laptop for remote workers - MyISAM keeps corrupting

    - by Jonathon
    We have an application that is used by remote, mobile workers. It installs WAMP (Server2Go) on a laptop and uses MySQL to store data locally. All tables are MyISAM. Once a day, the workers sync the database to our central server via HTTP scripts that query the data and post it to our site. The problem is that many of these laptop database tables keep corrupting. It appears that MySQL acts like it saves the information (I don't get any query errors), but the table ends up corrupt. I have to repair the tables constantly (which removes several rows of data in the process). Does anyone have any ideas about how to work around this problem? Would it be wise to switch to InnoDB on the laptops? How about a different database system altogether? I have looked at MySQL Embedded, but it appears to be the same engine as regular MySQL.
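
    MyISAM has no crash recovery, so unclean shutdowns (a laptop running out of battery or being suspended mid-write) routinely leave tables flagged as crashed, whereas InnoDB replays its log on startup. A hedged sketch of the two obvious experiments - repair everything in place, then convert one table to InnoDB on a test laptop (the database and table names below are placeholders):

        # check and auto-repair all MyISAM tables in one pass
        mysqlcheck --all-databases --auto-repair -u root -p

        # convert a single table to InnoDB to see whether the corruption stops
        mysql -u root -p appdb -e 'ALTER TABLE readings ENGINE=InnoDB;'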

    Read the article

  • How to stop syslog from listening to 514 on CentOS 5.8

    - by Jim
    I have a CentOS 5.8 machine (with regular syslog) that for some reason is listening on port 514, even though it is not started with "-r" (to receive remote syslog messages):

        # netstat -tulpn | grep 514
        udp        0      0 0.0.0.0:514    0.0.0.0:*    2698/syslogd

    Syslog is started with only "-m 0":

        ps -ef | grep syslogd
        root      2698     1  0 15:55 ?        00:00:00 syslogd -m 0

    I have tried starting it with "-m 0 -r", just to check if there was any difference, but there is not. This machine is a client and should only log to a central log server - it should not be listening itself. What am I missing?
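
    A small sketch of the usual checks, assuming the stock sysklogd package: on CentOS 5 the flags syslogd starts with come from /etc/sysconfig/syslog, so that is the place to confirm nothing is adding -r behind your back, and a forwarding rule in syslog.conf is worth a look as well:

        # syslogd's startup flags on CentOS 5 live here
        grep SYSLOGD_OPTIONS /etc/sysconfig/syslog

        # an "@remoteserver" forwarding rule can itself be the reason a UDP socket exists,
        # even though nothing is accepted on it without -r
        grep '@' /etc/syslog.conf

        # restart and re-check the socket after any change
        service syslog restart
        netstat -tulpn | grep ':514 '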

    Read the article

  • How to calculate unweighted averages in Excel PivotTable?

    - by yonatron
    I often make PivotTables in which each row contains a number of per-person average measures. I then want to look at the unweighted column average for each measure, and usually make some kind of chart from these. Because my individual cells are often averaged from different numbers of data points, the Grand Total row ends up being a weighted average, which I'm not interested in. So I usually make my own average row a few rows above the table to use for my charts. That's not too much work, but there's another problem: I often add a few more people's worth of data to the PivotTables' source, then refresh the tables. This means my average row needs to be updated to encompass more rows from the PivotTable. Not a huge deal with one table, but when I have lots of them across lots of sheets, I have to do find/replace on a whole bunch of formulas. So: is there a way to automatically get unweighted column averages in a PivotTable, such that when the table is refreshed, the averages don't change locations and encompass the newly added (or removed) data? Thanks

    Read the article

  • design a large scale network for an organization

    - by Essam
    I want to design a large-scale network for an organization with an HQ and two branches, and I want to use a class A subnet. If I am using the network address 30.0.0.0 for the whole organization, how can it be different from another organization or company in another country that is using the same address? I have three locations for this organization, so I need 5 subnets: one for the HQ, two for branch A and branch B, one for connecting branch A to the HQ, and one for connecting branch B to the HQ, since I will use a central DHCP server at the HQ. Is that number of subnets right? Also, is it advisable to use class A or class B for this organization in terms of addresses that will be wasted (let's say it is a university with two branches in two different states)?

    Read the article

  • Install latest kernel to Ubuntu and have grub acknowledge the new installed kernel

    - by Phuong Nguyen
    I already have an installed Ubuntu distro (Karmic 9.10). However, due to some problems with the xorg ati driver, I cannot put my computer into standby. Someone suggested that I try the latest version of the xorg driver, which in turn requires a newer Linux kernel (2.6.33) than the newest release available from the Ubuntu central repository. I have searched through several articles on how to install a custom Linux kernel, but these articles are from 2004/2005 and they were talking about LILO (???). I'm afraid that I cannot make GRUB recognize the new Linux kernel properly (I'm just a newbie to Linux). I would love to know how to install the kernel into Ubuntu and have GRUB acknowledge the newly installed kernel.
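
    A hedged sketch of the usual route on Ubuntu: install a prebuilt kernel package (for example one of the Ubuntu kernel team's mainline builds) rather than compiling, and let GRUB regenerate its menu. The file names below are placeholders for whatever 2.6.33 build you end up downloading:

        # install the kernel image (and, if you build modules, the matching headers) package
        sudo dpkg -i linux-image-2.6.33-*.deb linux-headers-2.6.33-*.deb

        # regenerate the boot menu; the new kernel shows up as a new entry
        sudo update-grub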

    Read the article

  • Ubuntu how to FTP transfer files to folder /var/www?

    - by jc.yin
    I'm new to Linux and I've set up a web server with Ubuntu Desktop edition so I can practice with the GUI a bit before transitioning to Ubuntu Server. I've already set up a LAMP stack as well as FTP. Now I just need to know how to transfer my web files to the /var/www folder in Ubuntu. Previously I worked on Mac OS, where there was a central server holding all the web files that I could FTP to. Now, after I've managed to connect via FTP to the Ubuntu server, I see all the folders such as Desktop, Downloads, Documents etc., but no web folder. Can anyone help me understand how to FTP to the /var/www folder in Ubuntu? Thanks
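
    The folders you are seeing are just your login's home directory; /var/www exists, it is usually just owned by root, so the FTP account cannot write there. A hedged sketch of one common fix - give your login write access through Apache's www-data group ("youruser" is a placeholder for the account you FTP in as):

        sudo adduser youruser www-data            # add your FTP login to the web server's group
        sudo chown -R root:www-data /var/www      # group-own the web root
        sudo chmod -R g+w /var/www                # let the group write to it

    After logging in again, change directory to /var/www in the FTP client (or point the FTP user's home directory there).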

    Read the article

  • Excel graph: turn stacked bar chart into part bullet chart

    - by Mike
    I have a simple data file that has one column of actuals and another of targets against categories. I would like to turn the TARGET figure into a 'bullet marker'. I've seen it done on other graphs, but I'm struggling with the category column being overwritten with the XY axis values, or, if I get close to doing it, the XY markers are not central. I've checked out Peltier, but his examples are based on even more complicated data than mine, so the steps required didn't seem to match up. Help greatly appreciated. Thanks, Mike. Example data:

        Cat   Actual   Target
        A     10       15
        B     10       12
        C     20       17

    Read the article

  • Load Balancing a UDP server

    - by Hellfrost
    Hello StackOverflow, I have a UDP server; it is a central part of my business process. In order to handle the load I'm expecting in the production environment, I'll probably need 2 or 3 instances of the server. The server is almost entirely stateless; it mostly collects data, and the layer above it knows how to handle the minimal amount of stale data that can arise from having multiple server instances. My question is, how can I implement load balancing between the servers? I would prefer to distribute the requests as evenly as possible between the servers. I would also like some affinity: I mean, if client X was routed to server Y, then I want all of X's subsequent requests to go to server Y, as long as that is sensible and does not overload Y. By the way, it is a .NET system... what would you recommend?
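
    This is an assumption on my part, since the question is about a .NET system, but if a Linux box can sit in front of the servers, LVS/IPVS balances UDP in the kernel and its persistence option gives exactly the client-sticks-to-one-server behaviour described. A minimal sketch (addresses and port are placeholders):

        # virtual UDP service on the director: round-robin, 300s of source-IP persistence
        ipvsadm -A -u 203.0.113.10:5000 -s rr -p 300
        # two real servers behind it, forwarded via NAT/masquerade
        ipvsadm -a -u 203.0.113.10:5000 -r 10.0.0.11:5000 -m
        ipvsadm -a -u 203.0.113.10:5000 -r 10.0.0.12:5000 -m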

    Read the article

  • How to properly store dotfiles in a centralized git repository

    - by asmeurer
    I'd like to put all my dotfiles (like .profile, .gitconfig, etc.) in a central git repository so I can more easily keep track of changes. I did this, but I would like to know how to properly keep them in sync with the actual ones in ~/. I thought I could just hard-link the two using ln, but this does not seem to work as I expected, i.e., if I edit one file, the other does not change. Maybe I misused the ln command, or else I misunderstand how hard links work. How do people usually do this? Judging by GitHub, it's a pretty popular thing to do, so surely someone has come up with a seamless way to do it. By the way, I'm on Mac OS X 10.6.
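
    For what it's worth, hard links tend to break here because git (and many editors) replace a file by writing a new one and renaming it over the old, which leaves the copy in ~/ pointing at the old inode. The common pattern is symlinks from ~/ into the repository instead; a minimal sketch, assuming the repo is checked out at ~/dotfiles and stores the files without the leading dot (both are assumptions):

        cd ~/dotfiles
        for f in profile gitconfig bashrc; do
            ln -sf "$PWD/$f" "$HOME/.$f"    # symlink, so edits on either path hit the same file
        done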

    Read the article

  • HP-UX -> Linux incremental remote backup

    - by stack_zen
    Hi. I need to set up a differential backup process from a range of remote HP-UX machines to a central RHEL5 server. I'd happily go with rsync; the problem is that stock HP-UX 11.11 has no built-in rsync and I don't have permission to install any software on the remote stock HP-UX boxes. How should I approach this? HP-UX provides fbackup (HP-UX exclusive) and cpio (available in RHEL5; it allows backing up only the files which changed, but always grabs each file in its entirety):

        ssh remote_user@remote_host 'find /u01/engine/logs/ -type f -name "*.log" | cpio -o | gzip -' | gunzip - | cpio -idmv

    Those solutions don't really answer my incremental (bandwidth efficiency) problem, do they?
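
    One way to get incremental behaviour out of that same pipeline, without installing anything on the HP-UX side, is to anchor find on a timestamp file. A hedged sketch (the marker path is arbitrary; this still ships each changed file whole, since cpio cannot do rsync-style deltas, but it stops re-sending files that have not changed):

        # one-time: create the marker so the first run has something to compare against
        ssh remote_user@remote_host 'touch -t 197001010000 /var/tmp/.last_sync'

        # each run: stamp "now" first, pull files newer than the previous stamp, then rotate the marker
        ssh remote_user@remote_host '
          touch /var/tmp/.sync_now
          find /u01/engine/logs/ -type f -name "*.log" -newer /var/tmp/.last_sync | cpio -o | gzip -c
          mv /var/tmp/.sync_now /var/tmp/.last_sync
        ' | gunzip -c | cpio -idmv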

    Read the article

  • WSS_Content contains?

    - by Mike
    WSS 3.0 / Windows 2003. I have a content database that keeps growing, named simply "WSS_Content". This database is separate from all the other content databases that are linked to a web application, but it is located in the same directory. I count 5 content databases in this directory, but only 4 web applications (excluding central admin). The trouble is it keeps growing in size, and I need to know what it is and why it's growing. Is this a default database of some kind? Where and why would it grow? I recently found, through Central Administration, that one of my sites has a content database named "WSS_Content_(random numbers and letters)", whereas the other content databases have names like "WSS_Content_(WebApplicationName)". What gives?

    Read the article

  • Likeliness of obtaining same IP address after restarting a router

    - by ?affael
    My actual objective is to simulate the logged IPs of web-site users who are all assumed to use dynamically assigned IPs. There will be two kinds of users: good users, who only change IP when the ISP assigns a new one, and bad users, who will restart their router to obtain a new IP. So what I would like to understand is what assignment mechanics are usually at work here - from what pool of IPs one is chosen, and whether the probability is uniformly distributed. I know there is no definite and global answer, as this process can be adjusted by the ISP, but maybe there is something like a technological frame and common process that allows some plausible assumptions. UPDATE: A bad user will restart the router as often as possible if necessary. So here the central question is how many IP changes, on average, are necessary to end up with a previously used IP.

    Read the article

  • Mangling traffic from a Mikrotik Router

    - by TiernanO
    I have a MikroTik-powered router in the house with a couple of internet connections (two 200/10Mb cable modems and a 100/20Mb VDSL line). I am using mangle rules to set routing marks and NAT rules to do some load balancing, and everything seems to be going grand... but it only works for traffic from outside the router. Let me explain: I have 4 GigE ports on the machine, WAN1, 2 and 3, and a LAN port named LAN1. All traffic from LAN1 is getting mangled (as it should be), but traffic from the local router itself (proxy traffic, IPv6 tunnels, VPN connections) is not being mangled. It gets the first route to 0.0.0.0/0, which in my case is WAN2, and sticks with it. So, how do I get traffic from the local router itself to be mangled? Originally it was proxy traffic that caused the problem, but now with IPv6 and VPN it is even more important that this traffic be mangled... The last time I enabled IPv6, all traffic went through WAN2 only, and the rest were unused... Any ideas?

    Read the article

  • Experience with MQ File Transfer Edition?

    - by mfinni
    We've got several processes that move files across servers - SFTP, FTP, SCP; Windows, Linux, AIX; there is a workflow component (we usually require a control file with filenames and hash values to move a batch of related files). The action is often initiated on our servers to get the files, so we need to make sure they're done being written. We have some homegrown scripts to do this, but they don't always work properly, and troubleshooting, maintenance, and log review are not easy this way. There are a lot of servers, and our scripts don't have central logging or a dashboard/console/etc. We're looking into commercial products to do this. Has anyone used MQ File Transfer Edition? Another team in our company is using Aspera; does anyone have any thoughts on that, or other favored products? I have no idea what our budget is for this yet. Just trying to get a handle on the product space from the perspective of other admins.
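
    Not a comment on MQ FTE or Aspera themselves, but since the sticking point is knowing when a batch is complete, here is a minimal sketch of the check such a homegrown poller usually boils down to, assuming a control file of "<checksum>  <filename>" lines (the control file name and the hash tool are assumptions):

        # only pick the batch up once every file listed in the control file verifies
        if sha256sum --quiet -c batch_20120417.ctl; then
            echo "batch complete - start the transfer"
        else
            echo "files still being written or missing - try again next poll" >&2
            exit 1
        fi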

    Read the article

  • Web Folder size/quota reporting tool?

    - by nctrnl
    I am currently using a Visual Basic script to determine how big the web folders are and what quota is assigned to each folder. The quota is in no way a physical limit, just a value set by me to decide whether a user is using too much space or not. The script does the job quite neatly and sends an HTML file by mail on a regular basis. The problem is that it's such a hassle to insert new quotas, since I have to fiddle around with the code. A central "control panel" with an overview and the ability to insert new quotas would be more suitable. Is there any software that can do the following: scan specified folders/subfolders; report the file size and present it in some sort of interface (could be a PHP/MySQL solution); and allow specifying a quota and seeing the difference? It is really important that the quota handling is simple enough that a non-technician can handle it.
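
    As a stop-gap while looking for a packaged tool, the scan-and-compare part is small enough to sketch in shell, with the quotas kept in a plain text file instead of hard-coded in the script; a hedged example (the file format here is made up):

        # quotas.txt: one "<folder> <quota-in-MB>" pair per line
        while read -r dir quota; do
            used=$(du -sm "$dir" | cut -f1)     # used space in MB
            printf '%-40s %6s MB used / %6s MB quota\n' "$dir" "$used" "$quota"
        done < quotas.txt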

    Read the article

  • Office jukebox systems

    - by Jona
    We're looking for a good office jukebox solution where staff can select songs via a web interface to be played over the central set of speakers. Must-haves: a web interface; an RSS feed or otherwise easy-to-scrape display of currently playing songs; the ability to play MP3s and manage an ordered playlist; good cataloguing of media; and support for multiple client OSs - Windows, Mac, Fedora Linux (which will probably be accomplished by virtue of a web interface). We have tried XBMC, which worked well as a proof of concept; however, the web interface is just too immature and has too many bugs for a reliable multi-user solution. I believe the same will be true of Boxee. Nice to have: the ability to play music videos on a monitor, and the ability to listen to radio streams, specifically Shoutcast and the BBC. The ability to run on Linux is a nice-to-have, but Windows solutions which work well would certainly be considered. I am aware of question 61404 and don't believe this to be a duplicate, due to the specific requirements.

    Read the article

  • What permissions are needed to do an LDAP bind to an Active Directory Server

    - by DrStalker
    What permissions are needed to perform an LDAP bind to an Active Directory server? I have a central domain (call it MAIN) that has two-way trusts to domains in other forests (call them REMOTE and FARAWAY). Using MAIN\myaccount as the username and my password, I can bind to REMOTE fine, but not to FARAWAY; I get an invalid credentials response: 80090308: LdapErr: DSID-0C09030B, comment: AcceptSecurityContext error, data 525, v893. In all other ways the trusts seem to work fine. What permissions do I need to check to figure out why the bind is failing? My understanding is that anyone in AUTHENTICATED USERS should be able to bind to LDAP, but that only seems to hold true for some domains and not others.
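
    Two things that may help narrow it down: "data 525" in that AcceptSecurityContext string is usually read as "user not found" (as opposed to 52e for a genuinely wrong password), and a plain ldapsearch simple bind is a quick way to test each forest with exactly the same credentials and username formats. A sketch with placeholder host and base DN:

        # try the same bind against a FARAWAY DC; swap in UPN or DOMAIN\user formats to compare
        ldapsearch -x -H ldap://dc1.faraway.example -b 'dc=faraway,dc=example' -s base \
          -D 'myaccount@main.example' -W '(objectClass=*)' dn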

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered on existing questions Modern browsers typically open a large number of simultaneous connections, to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a squid transparent HTTP proxy for central management of HTTP download. How can the number of simultaneous connections from squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time, delaying (but not dropping) the others.

    Read the article

  • Localized database for customers

    - by Jim
    The company I work for has just moved to AWS, and currently they have one very large central database, with the instance located in America. However, one of their clients has requested that all of their data be held in the EU. Creating an AWS instance in Ireland isn't a problem; the problem is the database and how to manage it. We were considering having another database that runs in the EU for European customers, and using a different primary key step so that the primary keys will never conflict in case the two locations need to be merged in the future. The problem is, if we have a customer that uses our system in both America and the EU, we would have to create 2 accounts for that user, and reporting across both regions would not be possible, as the connection time would be too high. Is there an alternative way to set this up?
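
    On the key-collision point only, and assuming MySQL (which the question does not actually state): the usual way to get a "different primary key step" per region is the auto-increment increment/offset pair, so the two sites generate interleaved ids that can never collide; a hedged sketch:

        # US instance generates odd ids, EU instance generates even ids
        mysql -e "SET GLOBAL auto_increment_increment = 2; SET GLOBAL auto_increment_offset = 1;"   # US
        mysql -e "SET GLOBAL auto_increment_increment = 2; SET GLOBAL auto_increment_offset = 2;"   # EU

    The same two settings would go in each instance's my.cnf so they survive a restart.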

    Read the article

  • IT Inventory Tracking

    - by DrStalker
    What is a good tool to keep track of IT inventory - systems that are installed and running, parts being ordered, that sort of thing? I'd love a central, web-based system (preferably something we can customize), but my searching so far has resulted in a lot of dead open-source projects that haven't been updated in a few years, and poorly made commercial websites that don't do a very good job of describing their product. The software doesn't have to be free or open source - a good commercial alternative is fine. It doesn't even need to be a web-based tool; that's just what I thought would be simplest to find and easiest to deploy. The number of assets it will be tracking is in the dozens, so it doesn't have to be a super high-end enterprise solution, but it does need to do a better job than an Excel sheet in a shared folder (which is our current "solution").

    Read the article
