Search Results

Search found 10556 results on 423 pages for 'practical approach'.

  • Error getting PAM / Linux integrated with Active Directory

    - by topper
    I'm trying to add a Linux server to a network controlled by AD, so that users of the server can authenticate against the AD domain. I have Kerberos working, but NSS / PAM are more problematic. I'm trying to debug with a simple command, and getting the error below. Can anyone help me debug this?

      root@antonyg04:~# ldapsearch -H ldap://raadc04.corp.MUNGED.com/ -x \
          -D "cn=MUNGED,ou=Users,dc=corp,dc=MUNGED,dc=com" -W uid=MUNGED
      Enter LDAP Password:
      ldap_bind: Invalid credentials (49)
          additional info: 80090308: LdapErr: DSID-0C090334, comment:
          AcceptSecurityContext error, data 525, vece

    I've had to munge some details, but I can tell you that cn=MUNGED is my username for logging into the AD domain, and the password I typed was the password for that domain. I don't know why it says "Invalid credentials", and the rest of the error is so cryptic I have no idea. Is my approach somehow flawed? Is my DN obviously wrong? How can I confirm the correct DN? There was a tool online for this, but I can't find it. NB: I have no access to the AD server for administration or configuration.
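
    Two hedged pointers, since the directory itself isn't visible from here: in that AcceptSecurityContext message, data 525 generally means the bind user could not be found at all (a wrong password gives data 52e), so the DN is the likely problem; in stock AD, user objects also live under the cn=Users container rather than an ou=Users OU. AD additionally accepts a userPrincipalName as the simple-bind identity, which avoids guessing the DN entirely:

      # bind with user@domain instead of a DN, then search by account name
      ldapsearch -H ldap://raadc04.corp.MUNGED.com/ -x \
          -D "MUNGED@corp.MUNGED.com" -W \
          -b "dc=corp,dc=MUNGED,dc=com" "(sAMAccountName=MUNGED)"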

  • Synchronize folder on network, preserving hard links

    - by Waleed Hamra
    I have a few computers using Windows XP Pro. I want to synchronize/back up a folder from one machine to another. So far that's a simple problem, and I've used FreeFileSync for such operations with very satisfactory results. But this all changes when hard links come into play. The folder in question contains lots of hard links; backup programs like that treat hard links as multiple files and copy them as such, greatly increasing the folder's size on the destination and defeating the purpose of using all those hard links in the first place. It gets more complicated when you consider that network shares on Windows DON'T expose hard-linking facilities, meaning that running a hard-link-aware tool like rsync with --hard-links over a share will be of no use. So my question: how can I back up my folder to the other computer while preserving hard links? I don't mind installing 3rd-party tools to do it, as obviously the standard Windows shares approach won't work... I'm guessing there might be some tool that can be installed on both machines and works in a server/client mode? Does anyone have any idea how to do this?
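
    A sketch of the guessed-at server/client mode: rsync run as a daemon on the destination (cwRsync is one Cygwin-based rsync build for Windows; the module name and paths below are assumptions). Since rsync then talks to rsync instead of to an SMB share, -H can detect and recreate the hard links:

      # rsyncd.conf on the destination machine
      [backup]
          path = /cygdrive/d/backup
          read only = false

      # on the source machine: -a for the usual attributes, -H for hard links
      rsync -aH --delete /cygdrive/c/data/ rsync://destination-host/backup/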

  • Why does Google Analytics use two domains?

    - by AKeller
    I'm building a distributed widget that is comparable to Google Analytics. Users will add a <script> tag to their site that references my widget's JavaScript file. The Google Analytics tracking code looks like this:

      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);
      _gaq.push(['_trackPageview']);
      (function () {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
      })();

    Can anyone explain the reasoning behind separate HTTP and HTTPS hostnames? My instinct is to just secure the www address and then use the protocol-less syntax, like //www.google-analytics.com/ga.js. But I'm sure the Google Analytics architects put a lot of thought into this approach. I'd love to understand their logic before I follow/ignore their model.

  • MS Word TOC that references # pages rather than page number

    - by buttonsrtoys
    We frequently need to write specifications in Word which require a TOC that refers to the total number of pages in a section, rather than the page number. E.g.:

      Section                      No. Pages
      01010 Summary of Work..............5
      01025 Prices.......................2
      01400 Quality Control..............1
      01700 Contract Close Out...........2

    A wrinkle is that each section is a separate file. To date, we've been writing our TOC by hand, which has introduced every error imaginable. Is there an MS Word feature that populates a TOC with page totals? If not, I've done a little VB in Office, so I wouldn't be opposed to that route if need be, as long as it was usable by our low-tech users. Related question: all the section files are in the same folder. It would be nice if the TOC loaded every file in the folder, rather than having to specify each one. Is this a feature of Word, or would it require VB? We tried a master document with links to subdocuments, but since the number of section files ebbs and flows with each project, that approach required too much maintenance for our Wordophobes.
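
    A minimal VBA sketch of the page-counting route, assuming the section files sit in one folder (path and pattern are placeholders); it leans on Document.ComputeStatistics, which asks Word for the repaginated page count of each file:

      Sub BuildSectionToc()
          Dim f As String, doc As Document, toc As String
          f = Dir("C:\specs\*.doc")       ' every section file in the folder
          Do While f <> ""
              Set doc = Documents.Open("C:\specs\" & f, ReadOnly:=True, Visible:=False)
              toc = toc & f & vbTab & doc.ComputeStatistics(wdStatisticPages) & vbCrLf
              doc.Close SaveChanges:=False
              f = Dir                     ' next match
          Loop
          Selection.TypeText toc          ' drop the finished list at the cursor
      End Sub

    Because the loop just takes whatever Dir finds, sections can ebb and flow without anyone maintaining a file list.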

  • Mail-Merge on Steroids: Can Word 2003 do this?

    - by richardtallent
    I have a huge report to put together, made up of over 1,000 smaller, nearly-identical reports. Each report includes:

    - General 1:1 information (basic mail-merge stuff)
    - Lots of text, some of which may need to be disabled or have alternate text based on a boolean field.
    - A few embedded images, preferably loaded via HTTP URL, but if they have to be on a file system, that's something I can do. (Filenames will be provided as a field in the data source.) Fortunately, all images are roughly the same size/shape.
    - Several 1:m tables with a few fields apiece.

    The kicker is the master/child tables. I've seen examples for Word 2000 that handle this by left-joining the master and child tables and using some IF/THEN logic to know whether to jump to the next master record. But in my case I have several of these subtables, so that approach won't really work. So, can Word 2003 handle arbitrary master/child tables? If so, how? If not, I've considered InfoPath, but I haven't used it before, and it seems to be made for data entry, not long formatted reports. I'm a software developer, so I could always hack something together with a massive VBA macro, or generate the report in HTML on the web server (where the data is coming from anyway). But I'm hoping Word will work without such gymnastics, since it will give the ultimate users of the report template better control over formatting and making minor changes.

  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow users who connect to their network accounts to store their data on a NAS drive. I want the users to authenticate against the Lion server, as this allows better policies and management for me, while their AFP share is located on the NAS drive. I've looked into home directories and network logins, but I don't want users to be dropped into a different login environment: just authentication against their account on the Lion server, with Finder taking them to their own storage area, located on the NAS drive.

    Currently I am using FreeNAS for both authentication and storage, but there are getting to be far too many people to manage each AFP share and account by hand, and relying on FreeNAS alone is extremely limiting for expansion; if something goes wrong with one entity, the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business. I have looked into LDAP, using the Lion server as an LDAP server for FreeNAS to authenticate against, but I have had issues with this and thought a different approach could be better, from the other side of the situation: providing the account with somewhere to store data, rather than the AFP share authenticating against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server so that it is recognised as a local drive, and so can be used for network accounts?
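
    On the last question, one hedged possibility is the autofs automounter built into OS X, which can graft an NFS export from the NAS into the server's directory tree so it behaves like local storage for home folders; hostnames and paths below are assumptions:

      # /etc/auto_master -- add a custom map:
      /Network/HomeDirs   auto_home_nas

      # /etc/auto_home_nas -- wildcard map: each user name maps to its export
      *   -fstype=nfs,rw   freenas.local:/mnt/tank/homes/&

      # reload the automounter
      sudo automount -vc

    Whether AFP-specific niceties survive depends on the protocol chosen; NFS keeps the setup simple at the cost of those features.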

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or similar shortcuts to .ssh/config? I'm curious because I have a minimal ssh server installed on my Android phone, and a minimal rsync tool on it as well. I'm getting tired of having to log in as root on the phone and symlink both tools to the standard places the Android OS looks for executables; the ssh server is bare-bones and uses a typical BusyBox-style multi-link binary for the basic Unix commands (which does not include rsync). So I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to rsync any files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android'), so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc.
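
    rsync itself reads no per-host client configuration (it does inherit Host aliases from ~/.ssh/config for the transport), so the closest equivalent is a small wrapper. A sketch, with the phone's rsync path copied from above:

      # ~/.bashrc -- "android" matching a Host entry in ~/.ssh/config
      rsync_android() {
          rsync --rsync-path=/path/to/rsync/android/files/rsync \
                --no-g --no-p --no-t "$@"
      }

      # usage:
      rsync_android -av android:/mnt/sdcard/DCIM/ ~/phone-photos/

    Unlike a glob trick, everything after the function name is still ordinary rsync syntax, so per-call customization survives.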

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I tried to ask this question a few days ago but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for the administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory owned by another user, like so:

      # path to my_application (owned by web1)
      /var/www/clients/client1/web1/web/my_application/

      # sym-link to my_application from my_site.com web root (owned by web5)
      /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

      # sym-link to my_application from my_site.net (owned by web4)
      /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few problems concerning permissions when performing filesystem operations from PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works. After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way, there would be only one user who has all the necessary permissions. However, this would mean we'd have to buy a multi-domain SSL certificate, and we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), where we also had only one web directory and multiple domains. So if this was possible, I wonder: is putting all the domains together into one vhost with one main domain and several alias domains the right approach in this case? Or have I misunderstood something?
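
    For reference, the alias setup being considered reduces to a single Apache vhost along these lines (ISPConfig normally generates this; paths are taken from the layout above):

      <VirtualHost *:80>
          ServerName   my_site.com
          ServerAlias  my_site.net my_site.org
          DocumentRoot /var/www/clients/client1/web1/web
      </VirtualHost>

    All requests then execute as web1, which removes the cross-user write problem; the SSL question is the remaining trade-off, since one vhost wants one certificate covering every name.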

  • Postfix: How to configure Postfix with virtual Dovecot mailboxes?

    - by user75247
    I have configured a Postfix mail server for two domains: domain1.com and domain2.com. In my configuration domain1 has both virtual users with Maildirs and aliases to forward mail to local users (e.g. root, webmaster) and some small mailing lists. It also has some virtual mappings to non-local domains. Domain2, on the other hand, has only virtual alias mappings, mainly to corresponding 'users' at domain1 (e.g. mail to [email protected] should be forwarded to [email protected]). My problem is that currently Postfix accepts mail even for those users that don't exist in the system. Mail to existing users and /etc/aliases works fine. The Postfix documentation states that the same domain should never be specified in both mydestination and virtual_mailbox_maps, but if I leave mydestination blank then Postfix validates recipients against virtual_mailbox_maps but rejects mail for the local aliases of domain1.com.

      # /etc/postfix/main.cf:
      myhostname = domain1.com
      mydomain = domain1.com
      mydestination = $myhostname, localhost.$mydomain, localhost
      virtual_mailbox_domains = domain1.com
      virtual_mailbox_maps = hash:/etc/postfix/vmailbox
      virtual_mailbox_base = /home/vmail/domains
      virtual_alias_domains = domain2.com
      virtual_alias_maps = hash:/etc/postfix/virtual
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      virtual_transport = dovecot

      # /etc/postfix/virtual:
      domain1.com         right-hand-content-does-not-matter
      firstname.lastname  user1
      [more aliases..]
      domain2.com         right-hand-content-does-not-matter
      @domain2.com        @domain1.com

      # /etc/postfix/vmailbox:
      [email protected]  user1/Maildir
      [email protected]  user2/Maildir

      # /etc/aliases:
      root: :include:/etc/postfix/aliases/root
      webmaster: :include:/etc/postfix/aliases/webmaster
      [etc..]

    Is this approach correct, or is there some other way to configure Postfix with Dovecot (virtual) Maildirs and Postfix aliases?
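
    A hedged sketch of the usual fix: once a domain is listed in virtual_mailbox_domains it can no longer lean on /etc/aliases, so the "local" names have to be resolved through virtual_alias_maps instead (the addresses below are placeholders):

      # /etc/postfix/main.cf -- keep domain1.com out of mydestination entirely:
      mydestination = localhost

      # /etc/postfix/virtual -- route alias names to real virtual mailboxes
      # (or to addresses handled by an :include:-style list elsewhere):
      root@domain1.com       user1@domain1.com
      webmaster@domain1.com  user1@domain1.com

    With that in place, any recipient found in neither virtual_alias_maps nor virtual_mailbox_maps is rejected with "User unknown in virtual mailbox table", which addresses the accept-everything symptom.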

  • How to execute a command whenever a file changes?

    - by Denilson Sá
    I want a quick and simple way to execute a command whenever a file changes. I want something very simple, something I will leave running in a terminal and close whenever I'm finished working with that file. Currently, I'm using this:

      while read; do ./myfile.py; done

    And then I need to go to that terminal and press Enter whenever I save the file in my editor. What I want is something like this:

      while sleep_until_file_has_changed myfile.py; do ./myfile.py; done

    Or any other solution that's as easy as that. BTW: I'm using Vim, and I know I can add an autocommand to run something on BufWrite, but this is not the kind of solution I want now. Update: I want something simple, discardable if possible. What's more, I want it to run in a terminal because I want to see the program output (including error messages). About the answers: thanks for all your answers! All of them are very good, and each takes a very different approach. Since I can accept only one, I'm accepting the one I actually used (it was simple, quick and easy to remember), even though I know it's not the most elegant.
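
    A sketch of the sleep_until_file_has_changed idea using inotifywait from the inotify-tools package (Linux-specific):

      # close_write fires each time the editor finishes saving the file
      while inotifywait -e close_write myfile.py; do
          ./myfile.py
      done

    It stays in the terminal, shows all output, and dies with a Ctrl-C, which matches the discardable requirement.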

  • Logging Remote Desktop to Servers via Logon Script or GPO or What?

    - by Nate Bross
    The objective here is to start a simple .NET application I've written which captures some environment variables (time, username, computername, etc.) upon login. This .NET application subscribes to the Windows "user logout" event. Upon launch, the application captures the above variables and creates a record in my database; upon logout (which I'm capturing) it updates another field in the same record with the logout time. The above works exactly as I would like: when I launch the binary, it makes its initial log entry, then waits for the logout event and updates the same record. Restrictions: the .NET binary should be able to live on a network share (\\server\share\myapp\v1) so I can update the application to (\\server\share\myapp\v2) and simply update the GPO/logon script. My initial thought was to use the \\domaincontroller\sysvol\ directory to store the binary and then update all user accounts to include a call to my application. Can you see any flaws in this approach? My question is this: first, is there anything wrong with my idea above? Second, if so, what is the best way (through Group Policy or otherwise) to ensure this application launches whenever a session is started on a server?
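
    If the share-hosted binary route holds up, the launch itself can be a one-line logon script assigned through Group Policy (User Configuration > Policies > Windows Settings > Scripts > Logon); the script and executable names below are placeholders:

      @echo off
      rem logon.cmd -- kept in SYSVOL so every DC replicates it
      rem "start" returns immediately, so the logon isn't held up while
      rem the tracker sits waiting for the logout event
      start "" "\\server\share\myapp\v1\SessionTracker.exe"

    Pointing the script at a versioned folder (v1, v2, ...) makes an upgrade one edit to this file rather than a change on every account.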

  • Distributing Microsoft Office Template or Macro over the network

    - by zfranciscus
    We have around 400 users who use Word, and we want to make their lives easier by distributing templates and macros over the network. The easiest way to do this, of course, is to set up a shared network folder and let them get the appropriate templates and macros there. But then each user has to know where to copy these files to on their local PC, and we have to rely on constant email communication to let them know about newer versions of the macros and templates. The next alternative is to ask them to configure Word to point to the network folder, but then any disruption to the network means disruption to their work. We are thinking of setting up a synchronization mechanism that downloads new templates to their local machines. We are also thinking of making this sync tool prompt users before it downloads new templates, just to give them visibility that they are receiving changes. We are wondering what approach people usually use in their workplaces. Are there any specific tools that can make this task easier?
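
    One common low-tech approach is a logon script that mirrors the share into each user's local template folder, so users pick up changes at every logon without knowing where anything lives. A sketch with robocopy (paths are assumptions; %APPDATA%\Microsoft\Templates is Word's default user template location, but check your File Locations settings):

      @echo off
      rem sync-templates.cmd -- run at logon via GPO
      robocopy "\\fileserver\WordTemplates" "%APPDATA%\Microsoft\Templates" /MIR /R:1 /W:1

    /MIR mirrors deletions as well as additions, and the low retry/wait values keep an unreachable fileserver from stalling the logon.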

  • saving data from a failing drive

    - by intuited
    An external 3½" HDD seems to be in danger of failing: it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off the dubious drive with the best chance of saving as much as possible. Some directories are more important than others, but I'm guessing that picking and choosing directories would reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to effectively prioritize directories. Adding in the fact that this is time-consuming, I'm leaning away from that approach.

    I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many retries that they endanger the saving of other parts of the drive. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly, e.g. pausing every x MB/GB, would be better than just running the operation full tilt, for example to avoid any overheating issues?

    For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that isn't backed up anywhere else. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently (orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all). The filesystem of the partition is ext3.
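
    GNU ddrescue (not to be confused with plain dd) implements almost exactly the wished-for strategy: a fast first pass that skips over trouble spots, then targeted retries of only the bad areas, with progress tracked in a map file so the run can be interrupted and resumed. A sketch, with /dev/sdX standing in for the failing drive:

      # pass 1: grab everything that reads cleanly, skipping problem areas
      ddrescue -n /dev/sdX /mnt/safe/disk.img /mnt/safe/rescue.map
      # pass 2: return to the bad sectors only, retrying each up to 3 times
      ddrescue -d -r3 /dev/sdX /mnt/safe/disk.img /mnt/safe/rescue.map

    Working from the resulting image afterwards (fsck it, mount it loopback) also dissolves the prioritization question, since the sick drive only gets read once.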

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, and I would like to see how to make it more flexible and stable. In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, and about once a month it doesn't happen, for reasons out of our control. This heavily impacts our business, since we then run with stale data. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill (http://www.github.com/arya/bluepill), but obviously restarts happen, both automatic and manual, and people forget things or systems mess up.

    What I would like to track is events that should occur or things that should exist: the existence of a process, the execution of a program, or the creation/age of a file, with an alert raised when they don't happen. We develop mostly in Ruby on Rails, use NewRelic, Bluepill and Munin, and run on Ubuntu. I've been toying with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and raise alerts if they don't?

    P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application; a dedicated daemon on the client side that we control, whose connection to us we can track, might be a much better solution.
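
    For the file-age and process-existence cases, a small cron-driven script is one workable stopgap before reaching for heavier monitoring; names, paths and thresholds below are assumptions:

      #!/bin/sh
      # check_freshness.sh -- run from cron every few minutes
      # alert if no dump has arrived in the last 26 hours (1560 minutes)
      if [ -z "$(find /data/incoming -name 'dump-*' -mmin -1560)" ]; then
          echo "nightly dump is stale" | mail -s "stale data" ops@example.com
      fi
      # alert if the background worker has vanished
      pgrep -f background_worker >/dev/null || \
          echo "worker not running" | mail -s "worker down" ops@example.com

    The same checks drop into Munin or Nagios as plugins almost unchanged, which buys alert history and escalation on top.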

  • How can I ensure that my static ip address is read from /etc/network/interfaces rather than dhcp?

    - by jonderry
    This is a follow-up to a previous question. I'm trying to set a static IP by changing /etc/network/interfaces to the following:

      # interfaces(5) file used by ifup(8) and ifdown(8)
      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet static
          address 192.168.2.133
          netmask 255.255.255.0
          gateway 192.168.2.1
          dns-nameservers 8.8.8.8

    and then running /sbin/ifdown eth0; /sbin/ifup eth0. However, the change in IP address doesn't take effect unless I first edit /etc/dhcp/dhclient.conf and comment out the following line before running ifdown; ifup:

      request subnet-mask, broadcast-address, time-offset, routers, domain-name, domain-name-servers, domain-search, host-name, dhcp6.name-servers, dhcp6.domain-search, netbios-name-servers, netbios-scope, interface-mtu, rfc3442-classless-static-routes, ntp-servers, dhcp6.fqdn, dhcp6.sntp-servers;

    Strangely, after commenting out this line, running ifdown; ifup works, but when I uncomment it, the behavior does not revert to the previous behavior of ignoring changes to my settings in /etc/network/interfaces. (That doesn't sound like a problem, but I really need to be able to reproduce the issue so I can be confident my solution is robust.) Also, I'd rather not have to edit /etc/dhcp/dhclient.conf to change my static IP, since it seems I should be able to do this by editing only interfaces. Can anyone explain the issues above and suggest a reproducible way of making changes to static IP addresses take effect, so that I can be sure my approach works?
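
    One common explanation, hedged since it isn't visible from the config alone: when an interface switches from dhcp to static, the dhclient started under the old configuration can keep running and re-assert its lease over the new address. Making sure it is gone before bringing the interface back up is worth a try:

      sudo ifdown eth0
      sudo pkill dhclient            # stop any lingering DHCP client
      sudo ip addr flush dev eth0    # drop addresses left on the interface
      sudo ifup eth0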

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (there's a good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking it. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

    1. basic OS installation
    2. basic network configuration, enough to reach the internet and resolve DNS
    3. install puppet and set up certname
    4. start puppet and let it manage the whole configuration
    5. test, fix problems in config (via puppet), re-test, and so on...
    6. manually stop puppet
    7. set up the new network configuration for the datacenter network
    8. move the machine to the DC
    9. turn it on; puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change over the whole system life cycle. Is there any problem with this approach? Could it be improved?
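
    For step 3, the relevant puppet.conf stanza is small; the names below are placeholders:

      # /etc/puppet/puppet.conf -- set before the very first agent run,
      # since the CSR is generated (and signed) under this name
      [agent]
          certname = web01.example.com
          server   = puppet.example.com

    With certname pinned, nothing depends on reverse DNS afterwards; the main caveat is that a mistyped certname means revoking and reissuing that node's certificate.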

  • Are there tools available for trimming PDF margins?

    - by Charles Duffy
    I have an ebook in PDF format that I'm trying to read on a Kindle. Unfortunately, the page headers and footers have some content (page number and copyright info, respectively) preventing the device from scaling the actual text to match its usable viewing area, thus leaving the content too small to read. Various tools are available which will trim off whitespace, but the Kindle already does this; my goal, by contrast, is to remove printed matter outside of a defined bounding box, and the only tool I've found for the purpose is moderately expensive commercial software. I could probably generate a mask in Inkscape, split out the individual pages using pdftk, apply the mask to each page individually (outputting to PostScript), and recombine the numerous PostScript files into a single PDF. However, that decode/re-encode step would be pretty unfortunate in terms of document size; something able to operate with a bit more finesse would be ideal. I have all major operating systems handy (Windows, several modern Linux distros, a Mac, etc.), so solutions don't need to be constrained by platform. Suggestions? (I've reported the issue to the author, who mentioned it to his editor, who hasn't done anything about it over the course of more than a month, making the zero-work approach evidently nonproductive.)
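
    One lightweight possibility, hedged because the box coordinates have to be found by trial: Ghostscript can stamp a CropBox onto every page via pdfmark. It does push the file through pdfwrite, but nothing is rasterized, so sizes usually stay close to the original; renderers that honor the CropBox then never see the headers and footers:

      # units are points (1/72 inch); the box below is a placeholder to adjust
      gs -o trimmed.pdf -sDEVICE=pdfwrite \
         -c "[/CropBox [54 72 558 720] /PAGES pdfmark" -f input.pdf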

  • Windows 7 product key, which is the valid one - in registry or on a sticker?

    - by me how
    I am not too familiar with software licensing and how it all works, but I have a question regarding Windows 7, and partly Office and Microsoft products in general. I have been asked to assist our IT guy, who wants to collect all the product keys for Windows 7 and Office. I wasn't given much detail on how to go about collecting them. After a bit of research I decided to use a freeware tool that pulls the software licenses out of the registry; I thought that was the easiest approach and would provide the most accurate product keys. I used Belarc Advisor to obtain all the information. It turns out that about 25 machines use the same product key. I asked whether the company has bought a volume license or something, but there isn't anyone available at the moment who could answer my question. I told the IT guy that there are 25 machines using the same product key and asked if that is alright. He told me to go around and write down the product keys from the sticker (label) on each machine. I am just not sure that's the right approach, especially as the numbers do not match... So, now that I see the numbers aren't matching, my question is: in terms of software licensing, which is the valid and correct product key to provide if ever questioned about a software license? Is it the number on the sticker, or the number stored in the registry?

  • Verification of downloaded package with rpm

    - by moooeeeep
    I wanted to install a package on CentOS 6 via rpm (e.g., the current epel-release). EDIT: Of course I would always prefer installation via yum, but somehow I failed to get that specific package installed using the normal approach; accordingly, the EPEL FAQ recommends Version 2 below. As I'm downloading the package through an insecure channel (http), I want to make sure the integrity of the file is verified using information that is not provided with the downloaded file itself. Is that actually ensured by all of these approaches? I've seen several variants on the internet:

    Version 1:

      rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 2:

      rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 3:

      wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
      rpm --import https://fedoraproject.org/static/0608B895.txt
      rpm -K epel-release-6-7.noarch.rpm
      rpm -i epel-release-6-7.noarch.rpm

    I do not know rpm very well, so I wondered how they might differ. My guess (after reading the manpage) is that the first should only be used when the package is not previously installed, the second would additionally remove previous versions of the package after installation, and the first two omit some verification steps that are done explicitly by rpm -K. So my main questions at this point are:

    - Are my guesses correct, or am I missing something?
    - Is the rpm --import ... implicitly done for the first two approaches as well, and if not, isn't it necessary to do it there too?
    - Are the additional checks performed by rpm -K ... at all relevant?
    - What is the best (most secure, most reliable, most maintainable, ...) way of installing packages via rpm in general?
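
    For what it's worth, a hedged sketch of what Version 3 looks like when it succeeds; the exact wording of the rpm -K output varies between rpm releases:

      $ rpm --import https://fedoraproject.org/static/0608B895.txt
      $ rpm -K epel-release-6-7.noarch.rpm
      epel-release-6-7.noarch.rpm: rsa sha1 (md5) pgp md5 OK

    The "pgp ... OK" part only appears once the signing key has been imported; without the import, rpm -K reports missing keys, and a plain rpm -i/-U proceeds with just a NOKEY warning, which is why the explicit import-and-check sequence is the verifiable one.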

  • Scripting around the lack of user:password@domain url functionality in jscript/IE

    - by Idiomatic
    I currently have a JScript that runs a PHP script on a server for me; dead simple. But... I want to be at least somewhat secure, so I set up a login. Now if I use the regular user:password@domain system it won't work (IE decided it was a security issue). And if I let IE remember the password, it pops up a security message confirming my login every time (which kills the point of the button). So I need one of the following:

    1. A way to make the security message go away. I could lower security settings, which tbh I am fine with, but nothing seems to make it disappear (there might be some registry setting to change).
    2. A fix for JScript that will let me use a password in the URL. There used to be a regedit that allowed IE to use URL passwords on older systems (it's not working on my 64-bit Windows 7 setup), though I doubt it would have helped JScript anyway (since it outright crashes).
    3. An app other than IE. In which case I'm not sure how to go about it; I want it to be responsive and invisible, so IE was a good choice. It is near instant.
    4. XMLHttpRequest instead of IE directly? It may even be faster, but I've no idea if it would help or just hit the same error.
    5. A completely different approach. Maybe some app that can script website browsing.

      var args = {};
      var objIEA = new ActiveXObject("InternetExplorer.Application");
      if( WScript.Arguments.Item(0) == "pause" ){
          objIEA.navigate("http://domain/index.html?pause");
      }
      if( WScript.Arguments.Item(0) == "next" ){
          objIEA.navigate("http://domain/index.html?next");
      }
      objIEA.visible = false;
      while(objIEA.readyState != 4) {}
      objIEA.quit();
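
    Option 4 may be the simplest, and it can stay inside the same .js file: MSXML's XMLHTTP object takes the credentials as extra arguments to open(), so nothing appears in the URL and IE's UI never gets involved. A sketch with placeholder credentials, assuming the server issues a Basic/NTLM challenge:

      var xhr = new ActiveXObject("MSXML2.XMLHTTP");
      // the 5th and 6th arguments carry user and password; false = synchronous
      xhr.open("GET", "http://domain/index.html?pause", false, "user", "password");
      xhr.send();
      if (xhr.status != 200) WScript.Echo("request failed: " + xhr.status);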

  • Can I set up arbitrary filesystem redirection in Windows?

    - by Jon
    I am sitting in front of a Windows 7 machine that has no drive Q:. Is it possible to arrange for accesses to Q:\somedir to be redirected to an arbitrary location on the existing filesystems (for example, C:\Windows)? I would especially like a "set it and forget it" option, if one exists. I am assuming (although I have not tried it) that it is possible to use SUBST to mount an existing (empty, created for this purpose) folder as drive Q: and then MKLINK /J to create a directory junction from Q:\somedir to wherever I want. However, this approach has a couple of drawbacks that I would like to avoid if possible:

    - The drive Q: will be visible in the system.
    - It is not as clean as I would like (removing the mounted folder will break it; a batch script needs to be manually added to the system startup).

    Is there a better option? If there is none and I am forced to make compromises, what is the closest I could get to the ideal solution? Assume anything is up for discussion.
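
    One "set it and forget it" variant, though it still leaves a visible Q: drive: define the drive letter in the registry's DOS Devices key, which the session manager recreates at every boot (no startup script needed), then junction the directory inside it. C:\qroot is an assumed backing folder:

      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices" /v Q: /t REG_SZ /d "\??\C:\qroot"
      mklink /J C:\qroot\somedir C:\Windows

    Both commands need an elevated prompt, and the mapping takes effect at the next boot.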

  • Mysql: create index on 1.4 billion records

    - by SiLent SoNG
    I have a table with 1.4 billion records. The table structure is as follows:

      CREATE TABLE text_page (
          text VARCHAR(255),
          page_id INT UNSIGNED
      ) ENGINE=MYISAM DEFAULT CHARSET=ascii

    The requirement is to create an index over the column text. The table size is about 34G. I have tried to create the index with the following statement:

      ALTER TABLE text_page ADD KEY ix_text (text)

    After 10 hours' waiting I finally gave up on this approach. Is there any workable solution to this problem? UPDATE: the table is unlikely to be updated, inserted into, or deleted from. The reason to create an index on the column text is that this kind of SQL query will be frequently executed:

      SELECT page_id FROM text_page WHERE text = ?

    UPDATE: I have solved the problem by partitioning the table. The table is partitioned into 40 pieces on column text. Creating the index on the table then takes about 1 hour to complete. It seems that MySQL index creation becomes very slow when the table gets very big, and partitioning reduces the table into smaller chunks.
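
    If full-column indexing isn't strictly required, a prefix index is one way to make both the build and the index itself far smaller; the 32 characters here are a guess to tune against the data's selectivity:

      -- index only the first 32 characters of text; equality lookups still
      -- use the index, with MySQL re-checking the full value on the rows hit
      ALTER TABLE text_page ADD KEY ix_text (text(32));

    Raising myisam_sort_buffer_size (and related limits) before the ALTER also matters on MyISAM, since it decides whether the index is built by sorting or by much slower keycache insertion.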

  • running a laptop continuously

    - by intuited
    I have an experienced laptop, a Dell Latitude D400 with a Pentium M CPU, that I'd like to run as an always-on server. This model was launched in 2004; I got mine second-hand in about 2007. I've heard that continuous operation is generally not a good idea with consumer hardware, but I'm lacking specific knowledge about the problems involved, and have little idea how much such a usage pattern would reduce the lifespan of the machine. I'm mostly concerned with the unit's core components; parts such as the hard drive which are readily replaceable are, well, readily replaceable.

    What sorts of things can I do to increase the lifespan of this machine under such circumstances? For example, I'm guessing that it would be wise to limit the CPU frequency or take other steps to keep the internal temperature low. However, I'm not sure where the point of diminishing returns would lie with such an approach: 50°C? 40°C? Would it be useful to suspend the machine periodically, for perhaps an hour each day, or a few hours each week?
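
    On the CPU-frequency idea, a sketch for a Debian-style distro (package names are an assumption) that keeps the Pentium M clocked down unless there is real work, plus a way to watch temperatures while experimenting:

      sudo apt-get install cpufrequtils lm-sensors
      sudo cpufreq-set -g ondemand    # scale up only under load; "powersave" pins the minimum
      sensors                         # read the current temperatures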

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented only by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path; then configuring a new system compatible with the old is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is look at the corresponding configuration files and try to guess what you changed to achieve the desired effect.

    Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes, etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and reliant on human judgment. Another thing I've tried is to put the /etc configuration files under version control with git. This helps me document the changes automatically and also apply them to a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may end up stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all of the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think that would require a lot of effort and be fragile. Are there better methods or tools that I'm missing?
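
    etckeeper is aimed at exactly the git-in-/etc pain points listed above: it wraps the repository in a sudo-friendly tool, hooks into apt/yum so package operations and the config changes they cause are committed automatically with sensible messages, and tracks the file metadata git ignores. A sketch for a Debian-style system:

      sudo apt-get install etckeeper     # initializes a repository in /etc
      sudo etckeeper commit "baseline before mail server changes"

    It doesn't cover directories outside /etc or record the "why" behind a change, so the administration log remains useful alongside it, but the mechanical recording becomes automatic.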

  • Choosing local versus public domain name for Active Directory

    - by DSO
    What are the pros and cons of choosing a local domain name such as mycompany.local versus a publicly registered domain name such as mycompany.com (assuming that your org has registered the public name)? When would you choose one over the other?

    UPDATE: Thanks to Zoredache and Jay for pointing me to this question, which had the most useful responses. That also led me to this Microsoft TechNet article, which states:

      It is best to use DNS names that are registered with an Internet authority in the Active Directory namespace. Only registered names are guaranteed to be globally unique. If another organization later registers the same DNS domain name, or if your organization merges with, acquires, or is acquired by another company that uses the same DNS names, then the two infrastructures cannot interact with one another. Note: Using single-label names or unregistered suffixes, such as .local, is not recommended.

    Combining this with mrdenny's advice, I think the right approach is to use either:

    - a registered domain name that will never be used publicly (e.g. mycompany.org, mycompany.info, etc.), or
    - a subdomain of an existing public domain name which will never be used publicly (e.g. corp.mycompany.com).

    The "never used publicly" part is a business decision, so it's probably best to get sign-off from those in the company authorized to reserve domain names and subdomains. E.g. you don't want to use a registered name or subdomain that the marketing dept later wants to use for some public marketing campaign.
