Search Results

Search found 25284 results on 1012 pages for 'test driven'.

Page 616/1012 | < Previous Page | 612 613 614 615 616 617 618 619 620 621 622 623  | Next Page >

  • The Correct Usage of DLLs with a DirectX Game?

    - by smoth190
    I'm using DirectX 10 (in C++) to make a game engine, and a test driver program on top of it. Now that I've written many messy rough drafts of an engine, I want to make the final (or sorta final) clean version. I chose to follow how I've seen other engines do it, and that's to have all the core nasty messy crap in a DLL, and then you can create games with just a few functions (well, not really :D). However, I'm unsure of what nasty messy crap to put in that DLL. I don't know about speed restrictions with DLLs. What I've done is put my winproc in the DLL, and have a class that takes the messages and sends them through to the program using the DLL. Then that program does what it needs to do, and calls a rendering function back in the DLL that renders everything. The only problem is that it gets very low FPS (2, to be exact...). I've looked through everything, and I don't know if the way I'm using DLLs is causing this, or if it's something different. Whether it's the DLLs or not, I still want to know how to use a DLL correctly with a game engine. I like being neat; I hate having to see all those long names of DirectX classes. I use typedef a lot.

    Read the article

  • Puppet templates and undefined/nil variables

    - by larsks
    I often want to include default values in Puppet templates. I was hoping that given a class like this:

      class myclass ($a_variable=undef) {
        file { '/tmp/myfile':
          content => template('myclass/myfile.erb'),
        }
      }

    I could make a template like this:

      a_variable = <%= a_variable || "a default value" %>

    Unfortunately, undef in Puppet doesn't translate to a Ruby nil value in the context of the template, so this doesn't actually work. What is the canonical way of handling default values in Puppet templates? I can set the default value to an empty string and then use the empty? test...

      a_variable = <%= a_variable.empty? ? "a default value" : a_variable %>

    ...but that seems a little clunky.

    Read the article

  • How do I enable deflate on Apache2? (Debian)

    - by acidzombie24
    I looked at the other answers and those solutions did not help. I am completely confused about how to do this. I know almost nothing about Apache or how to use it, and nearly all the articles say "write this" but don't tell me where. So far I have done this:

      # a2enmod deflate
      Module deflate already enabled

    Then I tried writing this in httpd.conf:

      AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css

    Then I wrote a longer version which involved deflate.c. After those didn't work, I deleted them all and wrote the AddOutputFilterByType line in apache2/sites-enabled/000-default, inside one of the blocks there. I used http://www.whatsmyip.org/http_compression/ to test; every time it says no compression. I used another site, but I could never get compression. Also, I restart the server every time I save a file, so what could be the problem and how do I enable this properly?

    Read the article

  • Why didn't cable select work?

    - by jldugger
    I got roped into doing tech support for a friend of the family. Obviously I'd already failed to hide my powers, a la Penny Arcade. Anyways, the guy bought an OEM DVD burner from Microcenter and asked me to install it. So I stopped by beforehand and thought I'd be slick and use Cable Select on the jumpers. I didn't get a chance to test it before I had to leave, and it seems that this didn't work. I came back this week to investigate, and he explained he was confused that none of the software he downloaded was able to burn. So on a whim I switched it to explicit master/slave, and it started working fine. Whoops. Well, at least it's not the extra crap he found and downloaded for free from the internet. Why doesn't setting both jumpers to Cable Select solve this?

    Read the article

  • XAMPP Closes the Connection and won't let me download anything

    - by Miro Markarian
    I want my XAMPP Apache server to host a zip file (the file is around 250 MB), but the server closes the connection and won't let me download the file. However, websites are loading correctly, and it seems that the problem is with the extension. Does XAMPP/Apache have any file extension limit that stops me from downloading .zip and .exe files? I also tested with a smaller .exe file, and the problem is still present. It just doesn't let me download any file from the server. Here is the file link to check: Test. All I get in the error log is this:

      [Fri Sep 07 23:21:31.742625 2012] [authz_core:debug] [pid 3664:tid 396] mod_authz_core.c(808): [client x.x.x.x:23409] AH01628: authorization result: granted (no directives), referer: http://ammiprox.tk/greeneyes2910/

    Read the article

  • Cannot boot from a hard drive

    - by Martin Melka
    I have a problem booting from an HDD. I used to have it as my main drive before I bought an SSD, so I had been able to boot from it. But for some reason, now, half a year later, I can't get it to work. I completely erased it, deleting data and partitioning (using EASEUS Partition Master), then I installed Kubuntu (without changing anything in the installer), but it simply won't boot up. It always boots the drive with Windows, and when I unplug that drive, it only gives me the error "PXE-E61: Media test failure, check cable", so I guess it's trying to boot from LAN. I tried installing the system on a freshly wiped drive, without any other drives plugged into the PC, but the problem persists. This is how the drives look now (the first one has Windows 7 installed, the second one Kubuntu). I am lost. I mean, after doing a fresh wipe and a clean install without altering anything, it should work. But it doesn't. What can be wrong here? Thanks

    Read the article

  • How to extract jpegs from a video file using ffmpeg

    - by Andrew Simpson
    I am using C# and ffmpeg. In this scenario I have 279 individual JPEGs, and I have used ffmpeg to create an AVI file from these images on my client.

      CMD line: -f image2 -r 10 -i "C:\000EC902F17F\img%05d.jpg" -s 352x288 -y "C:\1\test.avi"

    I then upload it to my server and extract the JPEGs from the AVI file.

      CMD line: -i c:\1\1.avi c:\1\img-%05d.jpg

    I get 265 JPEGs back. Obviously ffmpeg is dropping these frames, most probably when the AVI is first created. Is there a way to force it to encode using ALL the images I have? Thanks. PS: I did not specify any command line options other than the size of the video output. As far as I am aware, if none are specified then ffmpeg automatically chooses the best ones?
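    One possible angle, assuming the missing frames come from ffmpeg's default frame-rate handling rather than from the upload: force the output rate to match the input rate on encode, and pass -vsync 0 (passthrough) on extraction so ffmpeg neither drops nor duplicates frames. A rough C# sketch of driving both commands with System.Diagnostics.Process; the paths and rates mirror the question, everything else is illustrative:

      using System.Diagnostics;

      class FfmpegRunner
      {
          // Runs ffmpeg with the given arguments and waits for it to finish.
          static void RunFfmpeg(string arguments)
          {
              var psi = new ProcessStartInfo
              {
                  FileName = "ffmpeg",           // assumes ffmpeg is on the PATH
                  Arguments = arguments,
                  UseShellExecute = false,
                  RedirectStandardError = true   // ffmpeg writes its log to stderr
              };
              using (var p = Process.Start(psi))
              {
                  string log = p.StandardError.ReadToEnd();
                  p.WaitForExit();
              }
          }

          static void Main()
          {
              // Encode: set the same rate (10 fps) on input and output.
              RunFfmpeg("-f image2 -r 10 -i \"C:\\000EC902F17F\\img%05d.jpg\" -r 10 -s 352x288 -y \"C:\\1\\test.avi\"");

              // Extract: -vsync 0 (passthrough) asks ffmpeg not to drop or duplicate frames.
              RunFfmpeg("-i \"C:\\1\\test.avi\" -vsync 0 \"C:\\1\\img-%05d.jpg\"");
          }
      }

    Whether this recovers all 279 frames depends on how the AVI was encoded in the first place; it is a starting point rather than a guaranteed fix.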

    Read the article

  • Problems executing check_mysql plugin

    - by Brian
    I'm attempting to test the check_mysql plugin for Nagios from the command line. It works great when I'm just checking localhost, but when I try to specify a different server (using the -H argument) I keep getting the following error message:

      CRITICAL - Unable to connect to mysql://nagios_user@{remoteServerHostname}/ - Access denied for user 'nagios_user'@'{localServerHostName}' (using password: YES)

    It looks like it's trying to connect to the database using the localhost server name even though I've specified a different host name. I've already set up this user on the remote database and it has the correct permissions. I'm just trying to figure out why the script keeps inserting the localhost server name instead of the remote one.

    Read the article

  • Windows XP, HP, Intel Inside troubleshooting

    - by Jamil Hneini
    My Windows XP PC has stopped booting. On start-up, before Windows boots, a screen appears saying:

      Windows could not boot: could not find or corrupted file <Windows root>\System32\Hal.dll. Please reinstall the file...

    From that I understand that I have to reinstall the OS, but my computer is so old that I have lost the backup and recovery disk.

    My computer's info: 32-bit, Pentium 4, Windows XP Service Pack 2, HP.

    History: my computer has 4 hard drives: 2 internal 40 GB drives (C and D) and 2 external 8 GB drives (E and F).

    Additional info: I need the files inside C and E. The computer reads my flash drive as a hard drive (8.45 GB); I have never booted with the USB drive plugged in while I was troubleshooting. With some tests I discovered that the C drive wasn't detected by the computer. I have already configured the boot order in the BIOS and I have set the SATA mode to IDE.

    The question: I have a setup for Windows 8 32-bit; how can I boot from my USB drive to run that setup? If that isn't the proper way to handle this problem, please tell me the proper way.

    Read the article

  • Why is lshw only listing one of my two HDDs?

    - by Mark
    I have a 120GB HDD and a 1TB HDD in the system. I installed Ubuntu Server using LVM on the 120GB drive. After installation I added the 1TB drive to the existing volume group and added 10GB to /home as a test. My understanding is that lshw is supposed to list hardware. What's the difference here?

      mark@server:~$ sudo lshw -short -c disk
      H/W path            Device       Class   Description
      =========================================================
      /0/100/1f.2/0       /dev/sda     disk    120GB ST3120026AS
      /0/100/1f.2/1       /dev/cdrom   disk    DVD+-RW GH50N
      /0/1/0.0.0          /dev/sdc     disk    SCSI Disk
      /0/1/0.0.1          /dev/sdd     disk    SCSI Disk
      /0/1/0.0.2          /dev/sde     disk    SCSI Disk
      /0/1/0.0.3          /dev/sdf     disk    SCSI Disk

    The 1TB only shows up as a volume, not as a disk.

      mark@server:~$ sudo lshw -short
      ....
      /0/100/1f.2/0       /dev/sda     disk    120GB ST3120026AS
      /0/100/1f.2/0/1     /dev/sda1    volume  476MiB Linux filesystem partition
      /0/100/1f.2/0/2     /dev/sda2    volume  111GiB Linux LVM Physical Volume partition
      /0/100/1f.2/1       /dev/cdrom   disk    DVD+-RW GH50N
      /0/100/1f.2/0.0.0   /dev/sdb     volume  931GiB WDC WD1001FALS-0

    Thanks, Mark

    Read the article

  • Working with Reporting Services Filters–Part 5: OR Logic

    - by smisner
    When you combine multiple filters, Reporting Services uses AND logic. Once upon a time, there was actually a drop-down list for selecting AND or OR between filters, which was very confusing to people because often it was grayed out. Now that selection is gone, but no matter; it wouldn't help us solve the problem that I want to describe today. As with many problems, Reporting Services gives us more than one way to apply OR logic in a filter. If I want a filter to include this value OR that value for the same field, one approach is to set up the filter using the IN operator, as I explained in Part 1 of this series. But what if I want to base the filter on two different fields? I need a different solution.

    Using the AdventureWorksDW2008R2 database, I have a report that lists product sales. Let's say that I want to filter this report to show only products that are Bikes (a category) OR products for which sales were greater than $1,000 in a year. If I set up the filter like this:

      Expression       Data Type   Operator   Value
      [Category]       Text        =          Bikes
      [SalesAmount]                >          1000

    then AND logic is used, which means that both conditions must be true. That's not the result I want. Instead, I need to set up the filter like this:

      Expression:  =Fields!EnglishProductCategoryName.Value = "Bikes" OR Fields!SalesAmount.Value > 1000
      Data Type:   Boolean
      Operator:    =
      Value:       =True

    The OR logic needs to be part of the expression so that it can return a Boolean value that we test against the Value. Notice that I have used =True rather than True for the value. In the filtered report, any non-bike product appears only if its total sales exceed $1,000, whereas Bikes appear regardless of sales. (For example, Mountain-400-W Silver, 38 has sales of $923 in 2007 but gets included because it is in the Bikes category.)

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code: the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

      // With the SDK
      public class MyData1 : TableServiceEntity
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

      // With the Enzo Azure API
      public class MyData2 : BaseAzureTable
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

    Now that the classes representing an Azure Table entity are defined, let's review what the code looks like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

      // With the Azure SDK
      public List<MyData1> FetchAllEntities()
      {
          CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
          CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
          TableServiceContext serviceContext = tableClient.GetDataServiceContext();
          CloudTableQuery<MyData1> partitionQuery =
              (from e in serviceContext.CreateQuery<MyData1>(_tableName)
               select new MyData1()
               {
                   PartitionKey = e.PartitionKey,
                   RowKey = e.RowKey,
                   Timestamp = e.Timestamp,
                   Message = e.Message,
                   Level = e.Level,
                   Severity = e.Severity
               }).AsTableServiceQuery<MyData1>();

          return partitionQuery.ToList();
      }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually, so for larger entities, with dozens of properties, your code will grow.
    And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all of them) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

      // With the Enzo Azure API
      public List<MyData2> FetchAllEntities()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
          return res;
      }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, ...), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

      public List<MyData2> FetchAllEntitiesGUID()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
          return res;
      }

    Faster Results

    With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).
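    For comparison with the GUID fetch strategy above, here is a rough sketch of what coding such a parallel range split yourself against the plain Azure SDK types might look like; the single-character prefix boundaries and the ConcurrentBag are illustrative assumptions, not part of either API:

      // Illustrative only: split the PartitionKey (assumed to be a GUID string) into
      // prefix ranges and fetch the ranges in parallel with the plain Azure SDK.
      public List<MyData1> FetchAllEntitiesByGuidRanges()
      {
          string[] bounds = { "0", "2", "4", "6", "8", "a", "c", "e", "g" }; // "g" closes the last range
          var results = new System.Collections.Concurrent.ConcurrentBag<MyData1>();

          System.Threading.Tasks.Parallel.For(0, bounds.Length - 1, i =>
          {
              CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
              TableServiceContext ctx = storageAccount.CreateCloudTableClient().GetDataServiceContext();

              string lo = bounds[i], hi = bounds[i + 1];
              var rangeQuery = (from e in ctx.CreateQuery<MyData1>(_tableName)
                                where e.PartitionKey.CompareTo(lo) >= 0
                                   && e.PartitionKey.CompareTo(hi) < 0
                                select e).AsTableServiceQuery<MyData1>();

              foreach (MyData1 entity in rangeQuery)
                  results.Add(entity);
          });

          return new List<MyData1>(results);
      }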
    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless, I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries...)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • htaccess order Deny,Allow rule

    - by aspiringCodeArtisan
    I'd like to dynamically add IPs to a block list via .htaccess. I was hoping someone could tell me if the following will work in my case (I'm unsure how to test via localhost). My .htaccess file will have the following by default:

      order allow,deny
      allow from all

    IPs will be dynamically appended:

      Order Deny,Allow
      Allow from all
      Deny from 192.168.30.1

    The way I understand this is that it is by default "allow all", with an optional list of deny rules. If I'm not mistaken, Order Deny,Allow will look at the Deny list first; is this correct? And does the Allow from all rule need to be at the end?

    Read the article

  • How can I refresh/reinstall/clear/set-to-default my bootup process?

    - by Tchalvak
    I'm currently having a problem with my boot-up process that is growing progressively worse as time goes on. While booting, it does a few minutes of hard-drive reading. During that, instead of showing a boot splash screen, it shows various dashes and dots, as if the video output isn't being recognized. The output actually has colors similar to the splash screen (purple); it is simply garbled. It then does a few minutes of hard-drive reads, and if I leave it long enough, sometimes it boots into the desktop (and auto-logs-in). Sometimes, unfortunately, it just hangs on that garbled screen and reads from the hard drive forever. Notably, I've also stopped being able to access GRUB during boot-up (perhaps it is just not displayed correctly by the video; it's hard to tell). This is a symptom that has grown over the course of various Ubuntu upgrades; I suspect that the upgrade process is leaving behind cruft. So, is there a safe way for me to "refresh" the boot system so that it is clean, new, fast, and reliable? For example, to test out a cleanly configured boot, make sure that it works (try before I buy), and then apply it to the system to eliminate as much of this problem as possible? Edit: Here is the requested bootchart: http://imgur.com/9jocF

    Read the article

  • SQL Server 2000 reporting bad values to ASP.NET application

    - by Ben
    I have an instance of SQL Server 2000 (8.0.2039) with a rather simple table. We recently had users complain about an application I wrote returning bad values for some of the dates in the database. When I query the table directly via Server Management Studio, it returns the correct values; however, the identical queries from my application report the wrong values, but only for a couple of dates. I have been over the code, and it is solid. If the error were in the code, all of the dates reported should be wrong. I have also run the code against an identical test database, and everything is reported properly. I believe the problem may lie in the SQL instance itself, which is why I am posting on Server Fault. My question is: has anyone heard of a database reporting bad (incorrect) date values when queried via a web application? It should be noted that this particular server was once manually rebuilt after having a cluster clean run on it.

    Read the article

  • Having a Proactive Patch Plan is the way to Go!

    - by user793553
    BUILDING A SUCCESSFUL PATCHING STRATEGY

    Make Patching Easy! Having a patching strategy for your E-Business Suite system is a great way to manage your system downtime, identify the proper resources needed to perform the necessary tasks and familiarize yourself with the patching tools in EBS. Having a proactive patch plan is the way to go! Proactive patching is a preventive measure allowing you to have a complete patching strategy when applying patches periodically. Oracle provides several tools to help you get started and set the foundation for a solid and proactive patching strategy in Note 313.1 - "Patching & Maintenance Advisor: E-Business Suite 11i and R12". It details all the steps and tooling available for the patching strategy along with the benefits. Among other things it covers the following:

    - How to plan ahead for system downtime
    - Patching tools in E-Business Suite (Autopatch, OUI, OPatch)
    - How to identify patches (RUPs, EBS Family Packs, Critical Patch Updates, etc.)
    - How to properly test your patching plan and move to Production

    Make sure you visit the new E-Business Patching Community! We encourage you to access the "E-Business Patching Community" prior to applying an E-Business Suite patch. Doing so will allow you to explore perspectives shared by industry peers, get real-world experiences with the patch, and benefit from known solutions and lessons learned. Additionally, Oracle Support engineers monitor discussion topics to help provide guidance and solutions for your E-Business Suite patching needs. This is a valuable opportunity to "Get Proactive" with the patching and maintenance of your E-Business Suite environment. Start now, and find fast, proactive resolutions before you begin.

    Related Articles: What's the Best Way to Patch an E-Business Suite Environment? | Patch Wizard Utility

    Read the article

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a VertexBuffer, and preferably an IndexBuffer, but now that I'm attempting to test my code, I'm getting the error:

      The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing.

    Which makes absolutely no sense to me, as my VertexDeclaration is:

      public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
          new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
          new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
          new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
      );

    which clearly contains the information. I am attempting to draw with the following lines:

      VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration, c.VertexList.Count, BufferUsage.WriteOnly);
      IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
      vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
      ib.SetData<int>(c.IndexList.ToArray());
      GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count/3);

    where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). Not much else of relevance to this problem anywhere else. Note that large commented sections are from old versions of the drawing code.
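    The snippet above never binds the buffers it creates to the device before drawing. Whether or not that is what produces the Position0 message here, a minimal sketch (reusing vb, ib, c and GraphicsDevice from the question) of binding them first would look roughly like this:

      // After filling vb and ib with SetData as above, bind them to the device
      // before issuing the indexed draw call.
      GraphicsDevice.SetVertexBuffer(vb);   // the vertex declaration travels with the bound buffer
      GraphicsDevice.Indices = ib;

      GraphicsDevice.DrawIndexedPrimitives(
          PrimitiveType.TriangleList,
          0,                       // baseVertex
          0,                       // minVertexIndex
          vb.VertexCount,          // numVertices
          0,                       // startIndex
          c.IndexList.Count / 3);  // primitiveCount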

    Read the article

  • Motherboard rejects identical hard drive; one works, the other doesn't

    - by Payson Welch
    I have an interesting situation. I have a Dell XS23-SB server that has four blades in it. The blades use Supermicro X7DWT motherboards and interface with the SATA drives through a backplane. I took two identical drives from a RAID 0 enclosure that came from the factory (GDrive); one works on all four servers, the other does not. I verified that they both work by plugging them into a hard drive cradle. This behavior can be repeated with other drives: some drives work and some don't. However, when I test them, they ALL work on my PC. What could cause this?

    Read the article

  • iptables: built-in INPUT chain in nat table?

    - by ughmandaem
    I have a Gentoo Linux system running linux 2.6.38-rc8. I also have a machine running Ubuntu with linux 2.6.35-27. I also have a virtual machine running Debian Unstable with linux 2.6.37-2. On the Gentoo and Debian systems I have an INPUT chain built into my nat table in addition to PREROUTING, OUTPUT, and POSTROUTING. On Ubuntu, I only have PREROUTING, OUTPUT, and POSTROUTING. I am able to use this INPUT chain to use SNAT to modify the source of a packet that is destined to the local machine (imagine simulating an incoming spoofed IP to a local application or just to test a virtual host configuration). This is possible with 2 firewall rules on Gentoo and Debian but seemingly not so on Ubuntu. I looked around for documentation on changes to the SNAT target and the INPUT chain of the nat table and I couldn't find anything. Does anyone know if this is a configuration issue or is it something that was just added in more recent versions of linux?

    Read the article

  • Testing UDP port connectivity

    - by Lock
    I am trying to test whether I can get to a particular port on a remote server (I have access to both machines) through UDP. Both servers are internet-facing. I am using netcat to have a certain port listening. I then use nmap to check for that port to see if it is open, but it doesn't appear to be. iptables is turned off. Any suggestions why this could be? I am eventually going to set up a VPN tunnel, but because I'm very new to tunnels, I want to make sure I have connectivity on UDP port 1194 before advancing.
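    Since UDP has no handshake, a scanner can only report a port as open|filtered unless something actually answers, so one way to test is an echo round-trip you control. A small C# sketch, assuming .NET is available on both ends; the port and payload are placeholders:

      using System;
      using System.Net;
      using System.Net.Sockets;
      using System.Text;

      class UdpCheck
      {
          // On the server: listen on the UDP port and echo back whatever arrives.
          static void Listen(int port)
          {
              using (var server = new UdpClient(port))
              {
                  var remote = new IPEndPoint(IPAddress.Any, 0);
                  byte[] data = server.Receive(ref remote);   // blocks until a datagram arrives
                  server.Send(data, data.Length, remote);     // echo it back
                  Console.WriteLine("Got {0} bytes from {1}", data.Length, remote);
              }
          }

          // On the client: send a datagram and wait a few seconds for the echo.
          static void Probe(string host, int port)
          {
              using (var client = new UdpClient())
              {
                  client.Client.ReceiveTimeout = 5000;        // milliseconds
                  byte[] payload = Encoding.ASCII.GetBytes("ping");
                  client.Send(payload, payload.Length, host, port);
                  var remote = new IPEndPoint(IPAddress.Any, 0);
                  byte[] reply = client.Receive(ref remote);  // throws SocketException on timeout
                  Console.WriteLine("Reply from {0}: {1}", remote, Encoding.ASCII.GetString(reply));
              }
          }

          static void Main(string[] args)
          {
              if (args.Length == 1) Listen(int.Parse(args[0]));   // e.g. UdpCheck 1194 (on the server)
              else Probe(args[0], int.Parse(args[1]));            // e.g. UdpCheck server 1194 (on the client)
          }
      }

    A reply means the port is reachable; a timeout means the datagram or the reply was dropped somewhere along the path.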

    Read the article

  • Windows 2003 SMTP virtual server: why are emails not delivered?

    - by bardan
    I configured Windows 2003 as my email server; everything works fine with POP3 (I'm able to receive emails). The problem is with SMTP, and I can't figure out how to find where exactly the problem is, because email looks like it is sent, but recipients don't receive anything. I had some problems with relaying, but fixed everything, and now I have configured Outlook Express on the same machine and tried to send emails. Everything looks fine: the email goes to the Sent folder, there are no errors, but the recipients (I tried several different ones) don't receive any messages. I also tried to test from the same machine with telnet, as described at http://support.microsoft.com/kb/153119, and everything looks OK...

    Read the article

  • Network connectivity issues with Windows Store

    - by Duy Tran
    I have Windows 8 Pro build 9200 installed on my Dell laptop. I want to install some new apps and updates from the Store, but there seems to be some network problem that causes the download gauge to show up without actually progressing. I followed some instructions to switch from a local user to my Microsoft account, but a "Please wait" screen keeps showing and I don't really know why. I still have internet access and can use apps like People, Mail, etc. with my account logged in, and I can surf the net using Firefox, Chrome and Internet Explorer. I did another test using cmd with ping -t google.com and it showed that my laptop has internet access. Does anybody know a solution to make the Store work properly? Or is there any workaround to switch to the Microsoft account instead of a local user account?

    Read the article

  • Ubuntu sound volume is 40% lower than in Windows

    - by ncomx
    I have 2x 2.1 speakers connected to the computer where I have Ubuntu 12.04 installed. On the software side I've set all the volume controls to 100% with the alsamixer program. The speakers have their own volume control; keeping that at the same level and switching between Ubuntu and Windows (XP and 7), the output volume on Windows is at least 40% higher, even with the Windows volume control at 50% (without touching the speakers' volume control). Why could this be happening? Are there some alternative sound drivers (other than the default ones) I could test to see if it makes a difference? Some info about the card:

      root:$ cat /proc/asound/cards
       0 [PCH            ]: HDA-Intel - HDA Intel PCH
                            HDA Intel PCH at 0xfbff4000 irq 55
       1 [Generic        ]: HDA-Intel - HD-Audio Generic
                            HD-Audio Generic at 0xfbcfc000 irq 56

      root:$ lspci | grep -i audio
      00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
      02:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Cayman/Antilles HDMI Audio [Radeon HD 6900 Series]

    I think the one I am using is the Intel one; the other seems to be from the VGA card, which is an ATI Radeon 6950. Running gstreamer-properties and switching between ALSA, OSS, OSSv4 and PulseAudio doesn't seem to make any difference.

    Read the article

  • Altq limits not being applied to UDP transfers

    - by overkordbaever
    I have an OpenBSD server acting as a router/firewall with the packet filter ruleset shown below, a Linux server, and a Linux client. When transferring files (using netcat) over TCP, the limits are applied (for example the 100Mbit limit in the example), but when transferring data over UDP, the limits aren't applied; the file always takes the same amount of time no matter the queue bandwidth limit I set (I can even turn off the queues completely and will still get the same result). Why aren't the queuing rules applied to UDP packets? The rules used:

      #queue rules
      altq on { $int_if, $ext_if } cbq bandwidth 100Mb queue { def, low }
      queue def bandwidth 0Mb cbq(default)
      queue low bandwidth 100Mb cbq

      #Passrules test
      pass out quick from $int_if to $ext_if queue low
      pass in quick from $ext_if to $int_if queue low
      pass out quick from $ext_if to $int_if queue low
      pass in quick from $int_if to $ext_if queue low

    I suppose this may be related to a question I've previously asked, though since it's more of a separate question, I suppose a separate question should be used for this.

    Read the article

  • What is the simplest and fastest way to transfer a large file over a Windows network?

    - by Sake
    I have a Windows Server 2000 machine running MS SQL Server that stores over 20GB of data. The database is backed up every day to the second hard drive. I want to transfer those backup files to another computer to build another test server and to practice recovery. (The backup never actually got restored for almost 5 years. Don't tell my boss about that!) I have trouble transferring that huge file over the network. I've tried a plain network copy, an Apache download, and FTP. Every method I tried ends up failing when the amount of data transferred reaches 2GB. The last time I successfully transferred the file, it was via a USB-attached external hard drive. But I want to perform this task routinely and preferably automatically. What is the most pragmatic approach for this situation?
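    One pragmatic angle, assuming a reachable UNC share on the destination machine: a copy that resumes from wherever a failed transfer stopped, so a break around the 2GB mark doesn't force a restart from zero. A rough C# sketch; the paths are placeholders:

      using System;
      using System.IO;

      class ResumableCopy
      {
          static void Copy(string source, string destination)
          {
              using (var src = new FileStream(source, FileMode.Open, FileAccess.Read))
              using (var dst = new FileStream(destination, FileMode.OpenOrCreate, FileAccess.Write))
              {
                  // Resume: skip the bytes that already made it across in a previous attempt.
                  long already = dst.Length;
                  src.Seek(already, SeekOrigin.Begin);
                  dst.Seek(already, SeekOrigin.Begin);

                  var buffer = new byte[1024 * 1024];
                  int read;
                  while ((read = src.Read(buffer, 0, buffer.Length)) > 0)
                      dst.Write(buffer, 0, read);
              }
          }

          static void Main()
          {
              // Placeholder paths: the nightly backup file and a UNC path on the test server.
              Copy(@"D:\Backups\db_full.bak", @"\\testserver\backups\db_full.bak");
          }
      }

    Rerunning the same program after a dropped connection simply continues from the last byte that reached the destination, and scheduling it makes the transfer automatic. It is also worth checking separately whether the consistent 2GB failure points at the destination filesystem or the transfer tools rather than the network.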

    Read the article
