Search Results

Search found 21463 results on 859 pages for 'broken link'.

Page 708/859 | < Previous Page | 704 705 706 707 708 709 710 711 712 713 714 715  | Next Page >

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd with PXELinux? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes, and we want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd which is loaded into RAM; everything is executed from there. We chose to run everything off the initrd for two reasons: NFS was not an option to serve the filesystem's extra contents, and file access from RAM is fast. No persistent storage is needed; data and config are pulled dynamically through a SOAP service. Now, our initrd is about 450 MB in size. At our network speeds it takes about two to three minutes to load a single client. Will compression speed up the downloading, and if so, which one should be used? Is LZMA supported by PXELinux, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives, for financial reasons (cheapest HDD @ €30 times 15 is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup? And how do we do this? Thank you for your time! Yvan Janssens
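
    A point worth noting (an assumption about how this setup behaves, not something stated above): PXELINUX only transfers the initrd over TFTP and hands it to the kernel, which does the actual decompression, so the usable formats depend on what the kernel was built to unpack (CONFIG_RD_GZIP, CONFIG_RD_XZ, and so on) rather than on PXELINUX itself. A minimal sketch of repacking a cpio-based initramfs, with hypothetical paths:

        # Repack with gzip (near-universally supported by kernels)
        cd /srv/initrd-root                                   # hypothetical staging directory
        find . | cpio -o -H newc | gzip -9 > /tftpboot/initrd.img.gz

        # Or with XZ/LZMA2, if the kernel has CONFIG_RD_XZ; the in-kernel
        # decompressor only accepts the crc32 integrity check
        find . | cpio -o -H newc | xz -9 --check=crc32 > /tftpboot/initrd.img.xz

        # /tftpboot/pxelinux.cfg/default (label and file names are examples)
        LABEL opencl-node
          KERNEL vmlinuz
          APPEND initrd=initrd.img.xz root=/dev/ram0

    How much the 450 MB image shrinks depends on its contents; binaries and libraries usually compress well, already-compressed data barely at all.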

    Read the article

  • Windows 7 Explorer: how to show total size of all files in current folder?

    - by matt wilkie
    In Windows XP Explorer one can turn on the Status Bar, which shows, among other things, the total size of all the files in the current folder, or the cumulative size of the selected files if a selection is made. How do I get the same at-a-glance information in Windows 7? Selecting files doesn't count, as it stops after 15 files, and it's rare that I'm concerned about total size with that few files (it's pretty easy to estimate in my head). Thanks. UPDATE: Information derived from the context menu (select, right-click, Properties) isn't "at a glance", and not as smooth as selecting files and clicking the details link at the bottom in any case. Thank you for fleshing out more of the available routes, though. Yes, Q19232 is similar to this one, though it is not a duplicate. That question is about easy free-space-on-disk stats and this one is about easy used-space-by-contents-of-this-folder stats. The answer for both is the same, though: you can't! Hopefully someone will figure out how to get this lost feature back with a shell extension or something.

    Read the article

  • Getting dwl-g122 to work on ubuntu

    - by User1
    I have a USB WiFi adapter, a D-Link DWL-G122, and I'm running Ubuntu 10.04. My laptop has a built-in wireless card that is connecting fine to the router. I plug in the USB adapter and it never really connects. Here are some details:

    iwconfig:

        wlan1  IEEE 802.11bg  ESSID:"\x0B\xE1..."
               Mode:Managed  Frequency:2.457 GHz  Access Point: Not-Associated
               Tx-Power=19 dBm  Retry long limit:7  RTS thr:off  Fragment thr:off
               Power Management:on

    lshw -c network:

        *-network:1
               description: Wireless interface
               physical id: 3
               logical name: wlan1
               serial: 00:13:46:8b:xx:xx
               capabilities: ethernet physical wireless
               configuration: broadcast=yes multicast=yes wireless=IEEE 802.11bg

    dmesg:

        [ 1096.814176] wlan1: direct probe to AP xxx (try 1)
        [ 1096.820960] wlan1: direct probe responded
        [ 1096.820969] wlan1: authenticate with AP xxx (try 1)
        [ 1096.823790] wlan1: authenticated
        [ 1096.823869] wlan1: associate with AP xxx (try 1)
        [ 1096.827667] wlan1: RX AssocResp from xxx (capab=0x411 status=0 aid=1)
        [ 1096.827674] wlan1: associated
        [ 1142.590912] wlan1: deauthenticating from xxx by local choice (reason=3)

    lsmod | grep rt2:

        rt2500usb              19643  0
        rt2x00usb              11260  1 rt2500usb
        rt2x00lib              32133  2 rt2500usb,rt2x00usb
        mac80211              238896  3 ath5k,rt2x00usb,rt2x00lib
        cfg80211              148725  4 ath5k,ath,rt2x00lib,mac80211
        led_class               3764  3 ath5k,rt2x00lib,sdhci

    It looks like the driver loads, but the device doesn't stay connected. The behavior is identical even if I blacklist the other WiFi card (which uses the ath5k driver). It's almost like it is using the wrong password or something. Does anyone know what is happening? Is anyone using this adapter successfully on Ubuntu?

    Read the article

  • Doesn't VirtualBox 4.0 support drag-drop file copy yet?

    - by Benjamin
    Version 4.0.0 will be a new major release. The following major new features were added:

        - New settings/disk file layout for VM portability; see the manual for more information.
        - Open Virtualization Format Archive (OVA) support; see the manual for more information.
        - VMM: support more than 1.5/2 GB guest RAM on 32-bit hosts.
        - Language bindings: uniform Java bindings for both local (COM/XPCOM) and remote (SOAP) invocation APIs.
        - Chipset: added support for the Intel ICH9 chipset with 3 PCI buses, PCI Express and Message Signaled Interrupts (MSI).
        - Audio: Intel HD Audio is now available as guest hardware, for better support with modern guest operating systems (e.g. 64-bit Windows; bug #2785).
        - GUI: redesigned user interface with guest window preview.
        - GUI: new display mode with downscaled guest display.
        - Resource control: added support for limiting a VM's CPU time and IO bandwidth.
        - Storage: support asynchronous I/O for iSCSI, VMDK, VHD and Parallels images.
        - Storage: support for resizing VDI and VHD images.
        - Windows Additions: support for automatically updating the Guest Additions (requires installed Windows Guest Additions 4.0 or later).
        - Guest Additions: support for copying files into the guest file system.

    What does the last line mean? I thought this was a drag-and-drop file copy feature like VMware's. I tried that, but I couldn't copy by drag-and-drop, or by Ctrl-C/Ctrl-V either. Edit: I mean VBox 4.0 beta, not 3.x. The release note is here. The download link is here.
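
    Independent of drag-and-drop support, a shared folder is one commonly used way to move files into a guest (this is general VirtualBox usage, not something the release notes above describe); a sketch with hypothetical names:

        # On the host: attach a shared folder to the VM ("MyVM" and the path are examples)
        VBoxManage sharedfolder add "MyVM" --name hostshare --hostpath /home/benjamin/share

        # Inside a Linux guest with the Guest Additions installed:
        sudo mount -t vboxsf hostshare /mnt/hostshare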

    Read the article

  • need to bring back win 7

    - by user290513
    I like making music and playing games, and occasionally do some Photoshop. I had a Windows 8 computer, but my mouse pointer always got stuck, so to try out something new I installed Ubuntu. Here is how I installed it: I went to the advanced startup options, clicked on "use a device" after plugging in my bootable USB with Ubuntu, and replaced my Windows 8 install with Ubuntu 14.04 LTS. I hope I did it correctly. After a few months I couldn't really find a good audio production program (not LMMS, because I use Stagelight), nor anything with a UI familiar to Photoshop's. So I decided to bring back Windows, but because of the bad experience with 8 I thought about bringing back Windows 7. I used an app named WinUSB to make my bootable USB drive after formatting it to NTFS in GParted, but when I go to my GRUB menu, my USB doesn't show up, my PC being a UEFI device, and I don't know how to get to the BIOS of my device. Can somebody tell me how to install Windows 7 completely, deleting Ubuntu, or at least give me a link to a tutorial? I have a netbook, an Acer Aspire One 725. I'm fine with using commands in the terminal. One other thing: my laptop doesn't have a CD drive or reader, so I can't put a CD inside.

    Read the article

  • Is it bad to redirect http to https?

    - by jasondavis
    I just installed an SSL certificate on my server. I use a web hosting panel called ZPanel, which is an open source project. It set up a redirect for all traffic on my domain on port 80 to port 443. In other words, all my http://example.com traffic is now redirected to the appropriate https://example.com version of the page. The redirect is done in my Apache virtual hosts file with something like this:

        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]

    My question is, are there any drawbacks to using SSL? Since this is not a 301 redirect, will I lose link juice/ranking in search engines by switching to https? I appreciate the help. I have always wanted to set up SSL on a server, just for the practice of doing it, and I finally decided to do it tonight. It seems to be working well so far, but I am not sure if it's a good idea to use this on every page. My site is not eCommerce and doesn't handle sensitive data; it's mainly for looks and the thrill of installing it for learning. UPDATED ISSUE: Strangely, Bing creates this screenshot from my site now that it is using HTTPS everywhere...
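
    If the redirect type is the concern, the same rule can be made an explicit permanent (301) redirect; a sketch of the rule with the R flag spelled out (whether ZPanel regenerates this vhost file on its own is an assumption worth checking):

        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L]

    Without an explicit code, mod_rewrite's R flag defaults to a 302 temporary redirect.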

    Read the article

  • Route all traffic of home network through VPN

    - by user436118
    I have a typical semi-advanced home network scenario: a cable modem (eth), a wireless router (Netgear N600, eth and wlan), a home server (running Ubuntu 12.04 LTS, connected over wlan), and a bunch of wireless clients (wlan). Lying around I have another cheaper wlan router and two different USB wlan NICs that are known to work with Linux. ACTA struck. I want to route ALL of my WAN traffic through a remote server through a VPN. For the sake of completeness, let's say there is a remote server running Debian Squeeze where a VPN server is to be installed. The network should then behave so that if the VPN is not up, the LAN is cut off from the outside world. I am familiar with general system/network practices, but lack the specific detailed knowledge to accomplish this. Please suggest the right approach, packages and configurations you'd use to reach said solution. I've also envisioned the following network configuration; please improve it if you see fit:

        ==LAN==
        Client:           ip 10.1.1.x      nm 255.0.0.0       gw 10.1.1.1     (reached via WLAN)
        Wlan router 1:    ip 10.1.1.1      nm 255.0.0.0       gw 10.10.10.1   (reached via ETH)
        Homeserver eth0:  ip 10.10.10.1    nm 0.0.0.0         gw 192.168.0.1  (reached via WLAN)
                          <<< VPN is initiated here; the other endpoint is somewhere on the internet
        Homeserver wlan0: ip 192.168.0.2   nm 255.255.255.0   gw 192.168.0.1  (reached via WLAN)
        ==WAN==
        Wlan router 2:    ip 192.168.0.1   nm 0.0.0.0         gw set via DHCP   uplink: cable modem
        Cable modem:      remote DHCP; has an on-board DHCP server for the ethernet device that
                          connects to it, and only works this way.

    All this WLAN fussery is because my home server is located in a part of the house where a cable link isn't possible, unfortunately.
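
    One possible shape for this, sketched rather than tested (OpenVPN is an assumption, and the names, addresses and interface names below are examples): run OpenVPN between the home server and the remote Debian box, send everything through the tunnel, and add a firewall policy so nothing leaks when the tunnel is down.

        # Fragment of the OpenVPN client config on the home server
        remote vpn.example.org 1194
        redirect-gateway def1            # replace the default route with one via the tunnel

        # Minimal "kill switch" for the home server's own traffic
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A OUTPUT -o tun0 -j ACCEPT
        iptables -A OUTPUT -p udp -d vpn.example.org --dport 1194 -j ACCEPT
        iptables -A OUTPUT -d 192.168.0.0/24 -j ACCEPT
        iptables -A OUTPUT -d 10.0.0.0/8 -j ACCEPT
        iptables -P OUTPUT DROP

    The LAN clients would then use the home server as their gateway, with similar FORWARD-chain rules (plus NAT onto tun0) deciding whether their traffic is allowed out.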

    Read the article

  • Is it possible to use rsync over sftp (without an ssh shell)?

    - by Tom Feiner
    Rsync over SSH works great every time. However, trying to rsync to a host which allows only SFTP logins, but not SSH logins, produces the following error:

        rsync -av /source ssh user@remotehost:/target/
        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]

    Here's the relevant section from the rsync man page: "This message is usually caused by your startup scripts or remote shell facility producing unwanted garbage on the stream that rsync is using for its transport. The way to diagnose this problem is to run your remote shell like this: ssh remotehost /bin/true > out.dat, then look at out.dat. If everything is working correctly then out.dat should be a zero length file. If you are getting the above error from rsync then you will probably find that out.dat contains some text or data. Look at the contents and try to work out what is producing it. The most common cause is incorrectly configured shell startup scripts (such as .cshrc or .profile) that contain output statements for non-interactive logins." Trying this on my system produced the following in out.dat:

        ssh-dummy-shell: Command not allowed.

    As I thought, the host is not allowing SSH logins. The following link shows that it is possible to accomplish this task using FUSE with sshfs - however it is extremely slow, and not fit for production use. Is there any chance of getting rsync over SFTP to work?
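
    For comparison, one tool that does incremental transfers over plain SFTP is lftp's mirror command (whether lftp is available and performs acceptably here is an assumption); a sketch with example paths:

        # Push /source to the SFTP-only host, transferring only newer files
        lftp -u user sftp://remotehost -e "mirror -R --only-newer /source /target; quit"

    This doesn't use the rsync delta algorithm, so unlike rsync it re-sends whole changed files rather than just the changed blocks.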

    Read the article

  • iptables: how to allow incoming FTP traffic?

    - by logansama
    Hi, still fighting my way through the jungle that is called iptables. I have managed to allow FTP access to the outside of our LAN; both of these would work (note: eth0 is the LAN interface and eth1 is the WAN interface):

        iptables -t filter -A FORWARD -i eth0 -p tcp --dport 20:21 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But when I connect to an external FTP server I manage to log in, and all is fine until it wants to list the directory contents. Then nothing happens, as the data is blocked, due to the fact that I do not have a rule set up to allow it (my last rule on the FORWARD chain is to block all traffic). I have tried a gazillion rules (many of which I did not understand) to try and allow the FTP traffic back through my server. One such rule, for example, was:

        iptables -A FORWARD -i eth1 -o eth0 -p tcp --sport 20:21 --dport 1024:65535 -j ACCEPT

    But I cannot get the listing to work; it just times out after a while. Would anyone know how to build a rule which would allow the FTP listing / allow such traffic back? Or have a link to sources I could work through? Thank you.
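
    The usual missing piece for FTP data connections is connection tracking rather than another port-range rule (this is a general iptables approach, not something given in the question); a sketch, assuming a reasonably recent kernel:

        # Load the FTP connection-tracking helper (ip_conntrack_ftp on older kernels)
        modprobe nf_conntrack_ftp

        # Let replies and helper-flagged related connections (the FTP data channel) back in
        iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT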

    Read the article

  • Restore a database with LDF file only

    - by Martin
    First of all, I know how stupid it is not to have any backup. I can't help it, but I have to (try to) solve it. I have a transaction log (LDF) file from a SQL Server 2000 database that contains all transactions since the creation of the database. No truncation has been done. The MDF file is gone, probably because of some disk failure. There is no backup - not of the original database and not of the transaction log. I have tried to link the transaction log to a new, clean database, but (of course) that failed because SQL Server checks the identity of both files. I have read about software that can read the transaction log. ApexSQL seems to do that. I tried to install the trial version, but it gives weird errors when trying to start the program. Does anyone know a solution for me? It may involve third-party software, but I'd prefer a clean SQL Server solution.

    Read the article

  • How do I get Tomcat 7 to start up faster in Linux CentOS kernel version 2.6.18?

    - by user1786833
    I am experiencing a problem with slow start-up times for Tomcat 7. I have done some testing by tweaking configuration parameters, both on Linux CentOS (kernel version 2.6.18) and on Windows 7, using this link as my primary guide: http://wiki.apache.org/tomcat/HowTo/FasterStartUp - and managed only a modest improvement. The improvements seemed to come when I added the metadata-complete="true" attribute to the <web-app> element of my WEB-INF/web.xml file, and when I added the names of almost all the jars we use for our application to the tomcat.util.scan.DefaultJarScanner.jarsToSkip property in the conf/catalina.properties file. I've also used this JAVA_OPTS in the setenv.sh file:

        JAVA_OPTS="$JAVA_OPTS -server -Xms1536m -Xmx1536m -XX:MaxPermSize=256m -XX:NewRatio=2 -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true "

    but actually saw my start-up times increase slightly. Our QA and production environments are on Linux CentOS, so I'm hoping to get more information on improving Tomcat 7 start-up times in that environment. My primary role is Java developer and I don't have much system administration experience, so I appreciate any input. Thank you for your time and suggestions.
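
    One more knob that often matters on Linux specifically (an addition, not something from the question): SecureRandom seeding can block on /dev/random during start-up, and pointing the JVM at urandom is a common workaround. A sketch of that, together with what the jar-skipping property looks like (the jar names are placeholders):

        # setenv.sh - avoid blocking on /dev/random while seeding the session-ID generator
        JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"

        # conf/catalina.properties - jars known to contain no TLDs or web annotations
        tomcat.util.scan.DefaultJarScanner.jarsToSkip=mylib-*.jar,thirdparty-*.jar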

    Read the article

  • How to have LiveJournal delegate my OpenID to something else?

    - by T-Boy
    I understand that if I have full control over my domain, I can set it up so that I can delegate the task of authenticating to another OpenID service provider. The problem is, what I'd like to do is get the LiveJournal server to pass the authentication to someone else, instead of having LJ do it. Preferably, what I'd like is for LiveJournal, when asked by a web site, to say, "No, I don't do it anymore - go to this address". The plan was that this address would then be in a domain I fully control, which would then pass it on to whichever service provider I choose. I don't even know if I've got my understanding of OpenID right, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like LiveJournal. ETA: Doing a little more reading, and examining the source of my LiveJournal user page, I note this particular line in the page's head:

        <link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" />

    I suspect that changing this will allow me to forward OpenID requests to whomever I wish; so far so good. Now comes the hard part - figuring out how to change all of that using LiveJournal's customization options, if that is at all possible (here's hoping I don't need to pay to get that functionality).
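
    For comparison, the usual delegation setup runs the other way round: a page on a domain you control points at the provider you want to use, by adding two link tags to its head (the URLs below are placeholders):

        <!-- in the <head> of https://example.com/, the URL you type as your OpenID -->
        <link rel="openid.server"   href="https://provider.example.net/openid/server" />
        <link rel="openid.delegate" href="https://yourname.provider.example.net/" />

    Relying parties fetch https://example.com/, follow these tags, and authenticate against the named provider, so no cooperation from LiveJournal is needed once the identity URL is your own domain.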

    Read the article

  • How do I set up Tomcat 7's server.xml to access a network share with an different url?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 server. Tomcat has access to a share, \\server\share, that has a documents folder I want to reach as '/foo/Documents' in my web application. My application is able to access the documents when I set the file path to '//server/share/documents/doc1.doc'. I don't want the file server's path to be exposed in the link to the file in my application; I want to be able to set the path to '/foo/Documents/doc1.doc'. In http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html, under 'Setting the Context Root Directory and Request URL of a Webapp', item number two says that I can rename the path by putting a context into the server.xml file. So I put:

        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
          <!-- SingleSignOn valve, share authentication between web applications
               Documentation at: /docs/config/valve.html -->
          <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->
          <!-- Access log processes all example.
               Documentation at: /docs/config/valve.html
               Note: The pattern used is equivalent to using pattern="common" -->
          <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                 prefix="localhost_access_log." suffix=".txt"
                 pattern="%h %l %u %t &quot;%r&quot; %s %b" />
          <Context path="/foo" docBase="//server/share" reloadable="false"></Context>
        </Host>

    The Context at the bottom was added. Then I tried to pull the file using '/foo/Documents/doc1.doc' and it didn't work. What do I need to do to get it to work correctly? Should I be using an alias instead? Are there other security issues that this may cause?

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min, and over the course of the backup the rate slowly drops so low that the BE job rate display in "current jobs" goes blank. Since we usually only do changed files for one day, the job is usually small and finishes overnight, so we don't worry about the slowness, but we had some issues with the backup server and missed about six days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, and I sent them that log and a VXgather from the BE host, and they had no fix or workaround. To give an idea, the mentioned working-set job has been running for the last 3.5 hours and it has backed up just under 10 megabytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

    Read the article

  • Synchronize folder on network, preserving hard links

    - by Waleed Hamra
    I have a few computers running Windows XP Pro. I want to synchronize/back up a folder from one machine to another. So far it's a simple problem, and I've used FreeFileSync for such operations with very satisfactory results. But this all changes when hard links come into play. Today's folder contains lots of hard links; using such backup programs results in hard links being treated as multiple files and copied as such, greatly increasing the folder size on the destination and defeating the purpose of using all these hard links in the first place. It gets more complicated when we take into consideration the fact that network shares on Windows DON'T expose hard-linking facilities, meaning that running a hard-link-aware tool like rsync with --hard-links will be of no use. So my question: how can I back up my folder to the other computer while preserving hard links? I don't mind installing third-party tools to do it, as obviously the standard Windows shares approach won't work. I am guessing there might be some tool that can be installed on both machines and works in a server/client mode? Does anyone have any idea how to do this?

    Read the article

  • Possible for linux bridge to intercept traffic?

    - by A G
    I have a Linux machine set up as a bridge between a client and a server:

        brctl addbr br0
        brctl addif br0 eth1
        brctl addif br0 eth2
        ifconfig eth1 0.0.0.0
        ifconfig eth2 0.0.0.0
        ip link set br0 up

    I also have an application listening on port 8080 of this machine. Is it possible to have traffic destined for port 80 passed to my application? I have done some research and it looks like it could be done using ebtables and iptables. Here is the rest of my setup:

        # set ebtables to pass this traffic up to IP for processing;
        # DROP on the broute table should do this
        ebtables -t broute -A BROUTING -p ipv4 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP

        # set iptables to forward this traffic to my app listening on port 8080
        iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 8080 --tproxy-mark 1/1
        iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1/1

        # once the flows are marked, have them delivered locally via the loopback interface
        ip rule add fwmark 1/1 table 1
        ip route add local 0.0.0.0/0 dev lo table 1

        # enable IP packet forwarding
        echo 1 > /proc/sys/net/ipv4/ip_forward

    However, nothing is coming into my application. Am I missing anything? My understanding is that the DROP target on the broute BROUTING chain will push the packet up to be processed by iptables. Secondly, are there any other alternatives I should investigate? Edit: iptables sees it at nat PREROUTING, but it looks like it drops after that; the INPUT chain (in either mangle or filter) doesn't see the packet.
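
    One setting worth checking in this kind of setup (an assumption, not something from the question): bridged frames only traverse the iptables chains when bridge-netfilter is enabled, so TPROXY on a bridge usually needs:

        # Make bridged IPv4 traffic visible to iptables
        # (needs the bridge / br_netfilter module, depending on kernel version)
        sysctl -w net.bridge.bridge-nf-call-iptables=1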

    Read the article

  • Take a regular Windows 7 clone with clonezilla (device-to-image)

    - by Mario De Schaepmeester
    I am inexperienced with cloning software and I've decided to use Clonezilla, as it seemed the best freeware option. I chose device-to-image and left most options at their defaults. I chose expert mode anyway to see what I could configure, and decided to try the lzop algorithm instead of the default one for compression. The rest was left at default. When Clonezilla asked me which partitions to clone (I chose parts to image), I chose the C:\ drive, but Windows 7 also creates a 100 MB partition on setup for system files (the actual boot partition?). I copied that into the image as well. The reason I didn't choose disk-to-image is that I also have a data partition that needs to stay intact. Now I'm simply not sure that this is the way to go, should I ever need to restore my disk image. Will Clonezilla know what to do with both partitions, and will Windows 7 work perfectly after restoring? Edit: apparently a similar question has been asked before. The link to the first article in the answer is not relevant to me, since it covers a device-to-device clone. It appears the Windows installation disk can repair the 100 MB partition. As for Clonezilla, it copies "hidden data after the MBR" by default too. I don't know; I feel I'll be all right whether I restore the partition with Clonezilla or repair it with the Windows 7 disk.

    Read the article

  • Basic connectivity issues between Win 7 and XP on a mixed wired/wireless network

    - by Pulse
    Setup: a Windows 7 x64 Ultimate desktop hard-wired to an Asus WL500gp router (WL500gpv2-1.9.2.7-d-r1445 firmware); several bridged VirtualBox VMs running XP, 7, Ubuntu Server 10.04, Mint 9 and SuSE 11.2; a Win XP Pro SP3 notebook with a D-Link AirPlus wireless network card. No firewall or other security software is currently running on either platform (at least for the duration of the test).

    Situation: the router is acting as DHCP server; clients are receiving correct addresses and additional parameters; internet connectivity is available from all clients; Windows 7 sharing is set to network type = work (not HomeGroup); NetBT is disabled on all clients, using SMB over TCP.

    What I can do: I can ping the router and internet addresses from the wireless XP notebook; I can ping the Win 7 desktop and any VM from the XP wireless notebook; I can ping all devices from the router; all VMs and the Win 7 machine can ping each other, the router, and internet addresses.

    What I can't do: I cannot ping the XP wireless notebook from either the Win 7 desktop or the VMs; it always returns "destination host unreachable". Tracert resolves the name of the XP notebook but also returns "destination host unreachable".

    From the above it would seem that something is blocking connectivity in a single direction (from the Win 7 box to the Win XP notebook) only, yet the router can ping the XP notebook. Some fresh input would be most welcome, as this is beginning to drive me batty. Thanks.

    Read the article

  • Splitting an HTTP request into multiple byte-range requests

    - by redpola
    I have arrived at the unusual situation of having two completely independent internet connections to my home. This has the advantage of redundancy, etc., but the drawback that both connections max out at about 6 Mb/s. So each individual outbound HTTP request is directed by my "intelligent gateway" (TP-LINK ER6120) out over one or the other connection for its lifetime. This works fine over complex web pages and utilises both external connections well. However, single-HTTP-request downloads are limited to the maximum rate of one of the two connections. So I'm thinking: surely I can set up some kind of proxy server to direct all my HTTP requests to. For each incoming HTTP request, the proxy server would issue multiple byte-range requests for the desired data and manage the reassembly and delivery of that data to the client's request. I can see this has some overhead, and also some edge cases where there will be blocking problems waiting for data. I also imagine webmasters of single servers would rather I didn't hit them with 8 byte-range requests instead of one. How can I achieve this HTTP request deconstruction/reconstruction? Or am I just barking mad?
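
    The building block here is the HTTP Range header. As a rough illustration of what such a proxy would issue on the wire (the URL and split point are made up), fetching two halves over the two links and stitching them together looks like this:

        # first ~5 MB over one connection, the remainder over the other, then reassemble
        curl -r 0-4999999 -o part1 http://example.com/big.iso
        curl -r 5000000-  -o part2 http://example.com/big.iso
        cat part1 part2 > big.iso

    This only works when the origin server honours Range and answers 206 Partial Content; a server that ignores it simply sends the whole file, which is one of the edge cases such a proxy has to detect.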

    Read the article

  • after building in more ram, bios/debian does nothing [closed]

    - by derty
    My private server has 2x1 GB of RAM, running 64-bit Debian on an Intel Q6600. It runs two virtual machines, each of which receives 512 MB of RAM, which you can imagine is a bit tight for the whole system. Now I got 2x2 GB of RAM from a friend. I'm not sure if they clock at the same speed, but I'm sure my power supply is not at its limit and can handle it. There are two RAM sockets left on the mainboard, so I shut down the system, built in the four gigs and looked at what happened. After pressing the start button, everything gets noisy as usual, BUT the screen shows nothing, not even the BIOS output it usually shows. Why isn't the system booting? I can imagine there is no direct link between Debian booting and the BIOS not showing a thing. Or is it GRUB? I mounted the system disk, and I can see it, switch folders and write stuff with vim; it does not seem like there is a problem there.

    Read the article

  • Why database partitioning didn't work? Extract from thedailywtf.com

    - by questzen
    Original link: http://thedailywtf.com/Articles/The-Certified-DBA.aspx. Article summary: the DBA suggests an approach involving rigorous partitioning, 10 partitions per disk (3 actual disks and 3 RAID). The stats show that the performance is non-optimal. Then the DBA suggests an alternative of 1 partition per disk (with more disks added). This also fails. The sysadmin then sets up a single disk with a single partition and saves the day. The size of the disks was not mentioned, but given today's typical disk sizes (of the order of 100 GB), the partitions would be huge; it surprises me that the single-disk, single-partition setup outperformed the others. Initially I suspected that the data was segregated and hence reads were faster. But how come the performance didn't degrade as time went by, with all the inserts and updates happening? Saw this on reddit, but the explanation there was heavily spindle/platter centered, and there was no mention of that in the article. Is there any other reason? I can only guess that the tables were using an incorrect hash distribution, causing non-uniform allocation across disks (wrong partitioning); this would increase fetch times. Any thoughts?

    Read the article

  • Is it possible to create a simple frontend indexer for openbittorrent torrents?

    - by SimonK
    I run a website which distributes a few files every now and again - live music performances by a rock band. I create a torrent file, set the trackers to openbittorrent, publicbt and other similar open trackers, upload the torrent file to my forum, my users download it, and the files are shared. No problems there. What I would like to do, though, is index those torrents properly on my website so I can follow seeders/leechers and other stats online. I know the open torrent trackers don't have an index, but I am aware of many, many indexing sites that do exactly that; I just don't know how. So what I'm asking is: what do I need to do to do that myself? I simply want to create a page that lists the torrents I and other users on my site create, the seeder/leecher ratio, a link to the torrent file, etc. What data do I need to be able to do that? I'm proficient in general web design, but I don't know what I would need data-wise to pull the required info on the torrents. Thanks.

    Read the article

  • Certificates in SQL Server 2008

    - by Brandi
    I need to implement SSL for transmissions between my application and SQL Server 2008. I am using Windows 7, SQL Server 2008, SQL Server Management Studio, and my application is written in C#. I was trying to follow the MSDN page on creating certificates and the section under 'Encrypt for a specific client', but I got hopelessly confused. I need some baby steps to get further down the road to implementing encryption successfully. First, I don't understand MMC. I see a lot of certificates in there... are these certificates that I should be using for my own encryption, or are they being used by things that already exist? Another thing: I assume all these certificates are files located on my local computer, so why is there a folder called 'Personal'? Second, to avoid the above issue, I did a little experiment with a self-signed certificate. As shown in the MSDN link above, I used SQL executed in SSMS to create a self-signed certificate. Then I used the following connection string to connect:

        Data Source=myServer;Initial Catalog=myDatabase;User ID=myUser;Password=myPassword;Encrypt=True;TrustServerCertificate=True

    It connected and worked. Then I deleted the certificate I'd just created and it still worked. Obviously it was never doing anything, but why not? How would I tell if it's actually "working"? I think I may be missing an intermediate step of (somehow?) getting the file off of SSMS and onto the client? I don't know what I'm doing in the least, so any help, advice, comments or references you can give me are much appreciated. Thank you in advance. :)

    Read the article

  • Backtrack, Wi-Fi not working

    - by hradecek
    I've installed BackTrack 5 R3 KDE, and I realized that my wireless is not working, but wired is working fine. Here's the lshw output:

        *-network
               description: Ethernet interface
               product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
               vendor: Realtek Semiconductor Co., Ltd.
               physical id: 0
               bus info: pci@0000:02:00.0
               logical name: eth0
               version: 05
               serial: 04:7d:7b:b7:46:f8
               size: 100MB/s
               capacity: 100MB/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100MB/s
               resources: irq:42 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff

    lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:14.0 USB Controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4)
        00:1d.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA AHCI Controller (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)

    Read the article

  • Help with routing table

    - by user68752
    I have tried to find the answer to my question but have not really found a clean and easy solution. I have a box (headless Ubuntu 10.04.1 Server, with one Ethernet port) on the LAN behind a router (running m0n0wall), on which I have successfully set up a PPTP device (ppp0); this is working flawlessly (following this link). The thing is, I want this box to route all its internet traffic through the VPN tunnel (the ppp0 device) but still be able to reach the local LAN on the 192.168.1.* subnet. I've succeeded a bit with this, but my problem right now is that I have port forwards (e.g. SSH) set up in the m0n0wall pointing to this specific box, which forces me to add static routes on the box for every network that wants to reach it through such a port. For instance, a machine with IP xyz.xyz.xyz.xyz needs a static route set up in the box's routing table before it can reach the box. This is the result of route -n:

        xxx.xxx.137.2   192.168.1.1   255.255.255.255   UGH   0   0   0   eth0
        xxx.xxx.137.2   0.0.0.0       255.255.255.255   UH    0   0   0   ppp0
        192.168.1.0     0.0.0.0       255.255.255.0     U     0   0   0   eth0
        yyy.yyy.0.0     192.168.1.1   255.255.0.0       UG    0   0   0   eth0
        0.0.0.0         0.0.0.0       0.0.0.0           U     0   0   0   ppp0

    Here xxx are the IPs provided by the VPN server, and yyy.yyy.0.0 is a net that I want to have access to the box; without that route I can't access the box from outside the LAN (via the port forwards done in the router software, m0n0wall). Is there a way around this ugly solution?
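
    One way to avoid adding a static route per remote network is policy routing, so that anything the box answers from its LAN address goes back via the LAN gateway regardless of the VPN default route; a sketch, where 192.168.1.50 stands in for the box's own LAN address (hypothetical) and 100 is an arbitrary routing table number:

        # Replies sourced from the LAN address leave via the m0n0wall, not the tunnel
        ip route add default via 192.168.1.1 dev eth0 table 100
        ip rule add from 192.168.1.50 table 100

    Locally originated traffic still picks up the ppp0 default route from the main table, so the "everything through the VPN" behaviour is preserved for outbound connections.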

    Read the article
