Search Results


  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people.

    Our internal network has three "segments" and I'd like to share bandwidth more or less equally between them (at least in the beginning). Further, I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either.

    The question is, what makes more sense and also guarantees better realtime throughput:

    1. Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer), and giving each of these classes three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates).

    2. Having one class per priority level on top, each having a different rate (again, priority won't matter), and giving each of them 3 sub-classes, one per segment, whereas all 3 in the realtime class have highest prio, lowest prio in the bulk class, and so on.

    I'll try to make this clearer with the following ASCII art image:

    Case 1:

    root --+--> Segment A
           |       +--> High Prio
           |       +--> Normal Prio
           |       +--> Low Prio
           |
           +--> Segment B
           |       +--> High Prio
           |       +--> Normal Prio
           |       +--> Low Prio
           |
           +--> Segment C
                   +--> High Prio
                   +--> Normal Prio
                   +--> Low Prio

    Case 2:

    root --+--> High Prio
           |       +--> Segment A
           |       +--> Segment B
           |       +--> Segment C
           |
           +--> Normal Prio
           |       +--> Segment A
           |       +--> Segment B
           |       +--> Segment C
           |
           +--> Low Prio
                   +--> Segment A
                   +--> Segment B
                   +--> Segment C

    Case 1 seems like the way most people would do it, but unless I'm misreading the HTB implementation details, Case 2 may offer better prioritizing. The HTB manual says that if a class has hit its rate, it may borrow from its parent, and when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree level are always preferred over those on a higher tree level, regardless of priority.

    Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone), and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen?

    Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send. Since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, that class is now served first, till it also hits its rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of bandwidth spare, as Segment C is not using any of its guaranteed bandwidth), but again, it has to wait for Segment B-Low Prio to also reach the root level. Once that happens, priority is taken into account again and this time Segment A-High Prio will get all the bandwidth left over from Segment C.

    Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again High Prio-Segment A is going to win as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B to also hit its rate. Once both have hit their rates and both have to borrow, High Prio-Segment A will win again until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which again has plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait again until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all the bandwidth root has left over.

    Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in Case 2 it seems like the realtime traffic has to wait less than in Case 1, since it only has to wait till the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in Case 1 that is the rate it has to wait for). Or am I totally wrong here?

    I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited. Starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but when doing so, a single priority class cannot saturate the link on its own any longer, even if all other priority classes are empty; the TBF will prevent that from happening. And this is also sub-optimal, since why wouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment?

    Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer it would be such an easy task if I could simply write my own scheduler (which I'm not allowed to do).
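
    For illustration, here is a minimal sketch of what the Case 2 hierarchy could look like as tc/HTB commands. The interface name, the 10 Mbit line rate, the class IDs and the per-class rates are all made-up assumptions for the sketch, not values from the setup described above; only the realtime (High Prio) leaves are shown, filters are omitted, and the Normal and Low Prio branches would be built the same way.

        # Assumptions: eth0 is the shaped interface, the line is 10 Mbit,
        # and the guaranteed rate is split evenly between the three segments.
        DEV=eth0
        LINE=10mbit

        tc qdisc add dev $DEV root handle 1: htb default 30

        # Case 2: one top-level class per priority level.
        tc class add dev $DEV parent 1:  classid 1:1  htb rate $LINE ceil $LINE
        tc class add dev $DEV parent 1:1 classid 1:10 htb rate 4mbit ceil $LINE prio 0   # realtime
        tc class add dev $DEV parent 1:1 classid 1:20 htb rate 4mbit ceil $LINE prio 1   # standard
        tc class add dev $DEV parent 1:1 classid 1:30 htb rate 2mbit ceil $LINE prio 2   # bulk

        # One leaf per segment under the realtime class; repeat analogously for 1:20 and 1:30.
        # Filters (not shown) would classify traffic into the leaves.
        tc class add dev $DEV parent 1:10 classid 1:101 htb rate 1333kbit ceil $LINE prio 0   # Segment A
        tc class add dev $DEV parent 1:10 classid 1:102 htb rate 1333kbit ceil $LINE prio 0   # Segment B
        tc class add dev $DEV parent 1:10 classid 1:103 htb rate 1334kbit ceil $LINE prio 0   # Segment C

    Because every class has ceil equal to the assumed line rate, any single leaf can still take the whole link when everything else is idle, which is exactly the property the priority-qdisc-plus-TBF approach described above would lose.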


  • Tried to join a workgroup but now can't log into system

    - by Ali
    Help - I tried to join a workgroup but now all of a sudden I can't log into my system. I get an "account not found" error - also, for some reason, when I click the Options button on the login screen I can't see the domain option at all. What's going on?! I can't even log in using the administrator login or anything!


  • Setting Timeouts: SQL Server 2008/IIS 7.5

    - by Julie
    We have recently migrated from a Win 2003/SQL Server 2000 system to Win 2008 64-bit R2, SQL Server 2008 R2. Our websites are in classic ASP, and this can't be changed to another scripting language at this time.

    On the old server, if I got stuck in some kind of endless loop, the page would throw an error. On the new server, I have a page that has some sort of looping problem: even though the SQL SP is called only once (and runs fine when run as a query on the server), it pegs SQL Server and therefore locks all of our websites. I'll get my code figured out, no biggie. But I need to make sure the server times out when this happens. (The page I'm working on runs fine with certain instances of the query, and locks with others using a different query variable. I can't have something like that sneak up on me on a page I haven't touched for three years.)

    I can't figure out how an SP that runs once on the server, from an ASP page, is tying up SQL Server this way. It's obviously some sort of a timeout issue, but I can't figure out where/which timeout values to change. I actually have to remote desktop to the server and kill the process in SQL Server. I'm afraid I'm a generalist, and server management is not my thing, even though it's my responsibility, so I am almost certain to have questions about any answer that I receive. How can I track this down? What settings do I need to change?

    More info: it's not SQL Server. On our test site, I created an ASP file that just did an endless loop (do while 1=1) and had the same problem - the other websites wouldn't load - without SQL Server being involved. So I think the reason the process was hanging is that the page wasn't timing out as it should, and so the connection to SQL was never closed. Killing the process in SQL Server would reset the page somehow. For my intentional endless loop, I had to refresh the app pool to get rid of it. This points more to either IIS or the ASP settings. The ASP timeouts are set to whatever the defaults were when the server was first loaded. I still can't figure out why one file is locking up all websites, though. Again, that didn't happen on the old server.


  • Can I use Excel to read barcodes and take me to a specific cell?

    - by Ben
    I work for a community group that holds an annual fundraiser for charity over a weekend. I am an Excel user and want to set it up so that I can assign a barcode on a card to a specific person. My hope is to be able to scan the barcode and have it take me to a specific cell in the spreadsheet so I can update the commitment amount, and provide as much anonymity for our donors as possible. Can this even be done?


  • TV Playout software for Windows Server without DirectX?

    - by Joel Kennedy
    I am trying to find TV playout software (a video mixer) for Windows Server 2003/2008 which can stream via RTMP to Justin.tv without the need for DirectX. The reason for this is that the server which would run the TV playout software is hosted/virtualized on an ESXi server, and does not have DirectX support. Even if there was software that could just stream a playlist of video files to Justin.tv, that would be a start. Thanks


  • ZigBee Maximum Bandwidth

    - by Kris
    What is the maximum rated bandwidth for ZigBee? I can't seem to find this information anywhere, not even on the ZigBee Alliance website at http://www.zigbee.org/ I did find some information elsewhere, but it dated back to 2004, so I'm guessing it's different now than it was 5 years ago. Thanks!


  • How to use offline mode in Safari

    - by Nathaniel
    So, I'm kind of falling in love with Safari 4 (sorry, Firefox). However, I'm the type who likes my browser cache. Doing a little bit of Googling, it seems Safari does have an offline mode like Firefox, Internet Explorer, and Opera (where you can view cached web pages offline), but I haven't found any way to activate it and just navigating to web pages with no net connection seems not to do it either. So, does Safari even really have an offline mode, and if so, how does one use it?


  • Don't see job schedule added by sp_add_jobschedule in SQL Mgmt UI

    - by Ariel
    I'm running a script like the one below on a SQL Server box and, even though it finishes correctly, when I right-click that job's properties in the SQL Mgmt UI and go to Schedules, I cannot see the schedule that was just added... what am I missing? (I'm using the right job_name param, etc.) Thanks!

        BEGIN TRY
            BEGIN TRAN
            EXEC msdb.dbo.sp_add_jobschedule
                @job_name = 'Job name',
                @name = N'Job schedule name',
                @enabled = 0,
                @freq_type = 1,
                @active_start_date = 20100525,
                @active_start_time = 60000
            COMMIT TRAN
        END TRY
        BEGIN CATCH
            SELECT ERROR_Message(), ERROR_Line();
            ROLLBACK TRAN
        END CATCH


  • Puppet configuration file on Windows

    - by Jeff Storey
    I'm running Puppet on Windows as an admin (testing on Windows 7, even though it is not officially supported). When I install Puppet following the Windows installation instructions, no puppet.conf file is generated in C:/ProgramData/PuppetLabs/puppet/etc. I can run puppet agent --genconfig to create one, but regardless of what values I put in there, it doesn't seem to respect them. Is this just a Puppet/Windows issue? Or am I doing something wrong?


  • How to disable "Attachment Execution Service" in Windows?

    - by netvope
    If I run an executable file downloaded from the Internet, Windows displays a warning that this file can potentially harm my computer. This happens even for files downloaded by Firefox (not just IE). On a networked drive, this seems to slow down program launch time a lot. From Wikipedia, I learned that the feature is called "Attachment Execution Service". How can I completely disable it? If this cannot be done, how can I instruct Firefox not to set the "downloaded" flag on the file?


  • Outlook 2003: open email in edit mode

    - by Eleasar
    Is there some macro (or C# code) to open an email automatically in edit mode? I know I can double-click an email item, go to Edit and then Edit Message - but can I automatically open an email this way? Or even better, could I show emails in edit mode in the reading pane?


  • How is load average related to CPU utilization?

    - by Kaustubh P
    I have been facing a load average of 3 for the past 2 days. The CPU utilization is never above 40% in any case. Here are some screenshots from the Server Density monitoring tool that I use. The process snapshot at the highest peak, at 0:00, is as follows: And the process snapshot at the peak created at 12:00 is: My question is, even though CPU utilization is not 100%, why am I facing a high load average? PS: All snapshots are sorted by descending CPU utilization.


  • About the security of adding a signature to a PDF file

    - by ????
    We can add a "bitmap" or image signature to a PDF file, either by using Adobe Acrobat or Mac's Preview app, but I wonder, besides always encrypting it with a password before sending it by email to the other party, how valid and secure is it? The reason is, if the signature is a bitmap, then there is nothing that prevents anybody from copying and pasting that image into other documents, or even, if a cheque is written to anybody at all (such as to the landlord), there is nothing that prevents the signature from being scanned and copied and pasted into any other PDF documents as well.


  • Is it recommended to use Windows XP System Restore?

    - by Stan
    I usually only enable System Restore on the OS drive. But even so, I rarely use it. Usually when I get infected, System Restore can't help resolve the issue. Besides getting infected, I can't think of any case that requires System Restore. So, is it recommended to enable it? Thanks.


  • Java compiler error: Can't open input server /Library/InputManagers/Inquisitor

    - by unknown (yahoo)
    I am trying to compile HelloWorld in Java under Mac OS X 10.6 (Snow Leopard) and I get this compiler error:

        java[51692:903] Can't open input server /Library/InputManagers/Inquisitor

    It happens when I use the terminal command javac and when I try to do this in NetBeans. I tried to open the "Inquisitor" folder, but I have no access to it, even if I log in as the root user. What is going on?


  • Router has traffic coming in, but it's not going anywhere. What can I do?

    - by dubRun
    OK, so I have a Linksys WRT-54G v4 running the latest version of DD-WRT (just downloaded it last week to try to fix the problem). There is consistently about 750 kbps coming into the router, but from what I can tell, it's not going anywhere inside the LAN or WLAN. I'm also having a lot of network dropouts while I'm listening to music or watching video over the network. The traffic is coming in even if there are no computers on the network (I turned them all off). What can I do to fix this problem? Here is a screenshot:


  • Cyberlink PowerDVD 9 on netbook

    - by marc_s
    I tried to install CyberLink's PowerDVD 9 on a friend's netbook. The installation went OK (even though the install screen is too big to fit and you can't see the "Next" buttons, etc.), but once installed, PowerDVD 9 refuses to launch. It claims it requires at least 1024x768 resolution - the Acer netbook has 1024x600 :-( Is there any way / hack / trick to get PowerDVD 9 to work anyway? Couldn't it scale down to, e.g., 800x600?


  • Alternative filesystems for SSD

    - by freedrull
    I am tired of watching fsck check my filesystem when my Eee PC 901 shuts down abruptly due to a crash. I know that with a journaling filesystem, I won't have to wait for a check. However, I am well aware of the poor I/O performance of the SSD, so I can imagine that using a journaling filesystem would be even more frustrating, since there will be constant writes to the journal. I will buy a new laptop without such a crummy SSD someday, but is there anything I can do now, on the software side of things?


  • Server unresponsive after successful OpenSSL connection

    - by Dan B
    I'm testing server connections using OpenSSL, with varying results.

    Server A: the connection is successful, as are user login and the other commands I expected to work.

    Server B: the connection is successful, but the server is unresponsive when I try to submit a command. I don't get an error, or even a disconnection – just a blank line from where I hit Enter or ^M.

    My hunch is that Server B's configuration requires a different character encoding or something and it's simply not recognizing my Enter keystroke, but I've looked to no avail... any suggestions would be appreciated!
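
    The exact invocation isn't shown above; a typical way to run this kind of test, assuming an SSL-wrapped service, is something like the following sketch. The host name and port are placeholders, not values from the post.

        # Hypothetical test command; host and port are assumptions.
        # The -crlf option translates line feeds from the terminal into CR+LF,
        # which some servers require before acting on a command, so it is a
        # quick way to test the line-ending theory mentioned above.
        openssl s_client -connect serverB.example.com:443 -crlf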


  • Read-only file system RHEL

    - by gthm geeky
    I am using RHEL 5.5 on my PC. I was playing around with chmod and chown, and suddenly my home folder became read-only. All the folders in /home/goutham/, where goutham is my username, became read-only. I can delete files for a few seconds after turning on the system; after that it says "Permission denied: read-only file system". I can't even create a folder with sudo mkdir. Please help me. My OS is on /dev/sda3.


  • Some European CDNs?

    - by ajsie
    Does anyone know some CDN providers in Europe? I keep seeing big names like Amazon, Google, Akamai, SimpleCDN, Yahoo in all my Google searches. Does it even matter where the CDN providers are located? Apparently it shouldn't matter, since their network grid is worldwide :)


  • Are there any generic KVM over IP cards/chips for motherboards without any such capability?

    - by eek142
    I have a remote server that doesn't have any IP KVM capabilities, meaning I can't remotely power cycle it or access the BIOS. I saw that ASUS offers something for their motherboards here: http://www.asus.com/Server_Workstation/Accessories/ASMB5iKVM/ But is there anything like this available for other motherboards? Even something that I could stash away somewhere in a hard drive bay that simply plugs into the board would be great.


  • What do you use to store all of your personal data?

    - by codeflunky
    I have been on a quest for years to find the perfect tool to store all "my stuff". You know... personal information, code snippets, software keys, people's birthdays, whatever. There are lots of tools out there for this sort of thing, but I've never found any of them quite what I need.

    Ideally, I would just be able to type some notes, tag them (I don't like the idea of folder organization... too cumbersome) and then easily search and retrieve what I need later. It seems so simple, but for some reason I just can't find it.

    I currently use Backpack (sometimes), which is OK, but I hate the fact that you always have to create "pages" to store things. I don't want to have to do that. I want to just type some notes, tag it and save. That's it. And Backpack didn't even have search for a long time. What I do like about Backpack is that it's fast and it's web based.

    I've tried some desktop apps, which probably came closer to the functionality I want, but I just hate being tied to a single machine. I want to be able to get to my stuff anywhere, so the web based thing is a definite requirement.

    Anyway, I'm thinking about writing my own thing for this if I can't find anything, but before I make the attempt, I was wondering if anyone has any suggestions? I've used Backpack, Zoho Planner, Stikkit and Google Notes so far, and they are not quite to my liking. Anyone? (Sorry if this is off-topic, but I figured you guys might be legitimately into this kind of thing... you know, storing code snippets and such.)

    UPDATE: I've been using Evernote for a few days, and it is exactly what I've been looking for. It is totally tag based and allows both online and offline usage. The desktop app sits in your system tray and allows you to add whatever you want on the fly, either as text notes or clippings from the browser. It also syncs it to the web (if you want), where you can get to it from anywhere using their web client. They even have a mobile client which I haven't used, but I will try it soon. Thanks again 18hrs. I wish I could give you 10 upvotes.


  • Trailing dots in URL result in empty 404 page on IIS

    - by Peter Hahndorf
    I have an ASP.NET site on IIS 8, but IIS 7.5 behaves exactly the same. When I enter a URL like:

        mysite.com/foo/bar..

    I get an error with a '500 Internal Server Error' status code, even though I have custom error pages set up for 500 and 404 and I don't see anything wrong with my custom error page.

    In my web.config system.web node I have the following:

        <customErrors mode="On">
          <error statusCode="404" redirect="/404.aspx" />
        </customErrors>

    If I remove that section, I get a 404.0 response back but the page itself is blank. In web.config system.webServer I have:

        <httpErrors errorMode="DetailedLocalOnly">
          <remove statusCode="404" subStatusCode="-1" />
          <error statusCode="404" prefixLanguageFilePath="" path="404.html" responseMode="File" />
        </httpErrors>

    But whether that is there or not, I get the same blank 404.0 page rather than my expected custom error page, or at least an internal IIS message. So, first of all, why is the ASP.NET handler picking up a request for '..'? (It also works with one or more trailing dots.)

    If I remove the following handler from applicationHost.config:

        <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" responseBufferLimit="0" />

    I get my expected custom 404 page, but of course removing that handler breaks routing in ASP.NET, among other things.

    Looking at the failure trace, I see the Windows Authentication module handling the request. Windows Authentication is disabled for the site, so why is that module even in the request pipeline?

    For now my fix is to use the URL Rewrite module with the following rule:

        <rewrite>
          <rules>
            <rule name="Trailing Dots" stopProcessing="true">
              <match url="\.+$" />
              <action type="Rewrite" url="/404.html" appendQueryString="false" />
            </rule>
          </rules>
        </rewrite>

    This works okay, but I wonder why IIS/ASP.NET behaves this way?


  • OCZ Vertex 2 not recognized by Ubuntu installer

    - by Zsub
    As I boot into the Ubuntu 10.10 (or 11.04, doesn't matter) live environment or installer, it just refuses to recognise my Vertex 2. It reports the disk as ATA and not supporting SMART, shows no serial number, and doesn't list the size correctly. All fdisk tells me is "Unable to read /dev/sda" (it's the only storage in the PC). I'm now running a temporary install of Windows 7 off of it, which worked like a charm, so where am I going wrong with Ubuntu...

    Specs:
    - Asus M4N68T-M LE V2 (BIOS 0702, most recent)
    - OCZ Vertex 2 SSD 60 GB
    - AMD Athlon II X4 640
    - Patriot PSD34G13332 4 GB DDR3 RAM (two banks)

    EDIT: I installed a second drive, installed Ubuntu on that and booted; it recognised the SSD just fine. I'm now trying to apt-get upgrade the live environment. I wonder if there is any way to sort of install Ubuntu from Ubuntu (I boot into the working install on the other drive, install it on the SSD and then boot from the SSD).

    EDIT2: Ok, so that doesn't work. The install detects the SSD; however, it cannot format it.

    EDIT3: After a fresh boot I can read out SMART data and even perform a read benchmark, but if I try to format it, or do a write benchmark, it'll crap out, and after that it says SMART is not supported. So basically it seems I can't write to the disk, as it will stop working when I do. I will try to run repeated read benchmarks to see if that has any effect.

    EDIT4: I'm running several read benchmarks on the drive right now; they give results that are to be expected from an SSD. If the read benchmarks don't fail, I can use fdisk on the disk, but it is now stuck trying to re-read the partition table after issuing the 'w' command.

    EDIT5: Parted Magic did recognize the drive, and with hdparm -I it could even tell me the drive was in a frozen state. I power-cycled it (just pull out the plug from the SSD and plug it back in) and it wasn't frozen anymore. After that I could upgrade the firmware on the drive (still using Parted Magic) and format it to ext4. After I rebooted into the Ubuntu installer, it wouldn't get recognized, and hdparm didn't want to talk to it, saying HDIO_DRIVE_CMD(identify) failed: Invalid exchange.

    EDIT6: For some reason, if I enable one of the RAID controllers (the one the SSD is connected to, obviously), Ubuntu will let me format it, mount it and write to it. The installer also recognizes it. However, if the RAID controller is enabled but no array is defined, the motherboard can't boot from it :(
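
    As a point of reference for the frozen-state check mentioned in EDIT5, something along these lines works from a live environment; the device name is an assumption taken from the fdisk message above.

        # Print the drive's security status; a line reading "frozen" (rather than
        # "not frozen") means ATA security commands are locked until the drive is
        # power-cycled, as described in EDIT5. /dev/sda is an assumption.
        sudo hdparm -I /dev/sda | grep -i frozen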

