Search Results

Search found 28279 results on 1132 pages for 'syntax case'.


  • Apache 2 Symbolic link not allowed or link target not accessible

    - by djechelon
    While the title of this question matches an already-asked question, in my case I have already set Options +FollowSymLinks. The setup is as follows: my hosting layout includes an htdocs/ directory that is the default document root for HTTP sites, and htdocs-secure for HTTPS; they are meant for sites that need a different HTTPS version. When both share the same files, I create a link from htdocs-secure to htdocs with ln -s htdocs htdocs-secure, but here comes the problem! The log still says:

        Symbolic link not allowed or link target not accessible: /path/to/htdocs-secure

    Vhost fragment:

        Header always set Strict-Transport-Security "max-age=500"
        DocumentRoot /path/to/htdocs-secure
        <Directory "/path/to/htdocs-secure">
            allow from all
            Options +FollowSymLinks
        </Directory>

    I think this is a correct setup, and the HTTP version of the site is accessible, so it doesn't look like a permission problem. How can I fix this? [Added] Other info: I use MPM-ITK, and I set AssignUserId to the owner/group of both directories.
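    One way to test the "link target not accessible" part is to walk the path as the same identity Apache uses for this vhost. A hedged sketch, assuming the AssignUserId user is siteuser (a placeholder) and that the util-linux namei tool is available:

        # Walk every component of both paths as the vhost's user;
        # namei -m prints the permission bits of each component, so any
        # directory missing the x bit for this user shows up immediately.
        sudo -u siteuser namei -m /path/to/htdocs-secure
        sudo -u siteuser namei -m /path/to/htdocs

    Since MPM-ITK serves the request as the AssignUserId identity, every directory from / down to the link target needs to be traversable by that user, which is a separate condition from the HTTP vhost working.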


  • (simple) linux HA with vmware vsphere?

    - by derhelge
    I hope my upcoming question is specific enough, and that you are able and willing to support :-) We have several openSUSE VMs in an ESX cluster (three ESX servers) with an attached iSCSI SAN. All of those Linux VMs are configured as single points of failure, which in the case of a web server means: LAMP, storage, etc., everything on one machine. This was very simple, and in case of a failure (in the last years: kernel panics or Apache crashes) a simple reboot triggered by a script did the job. But the problem is: how do I upgrade/maintain the web application or the underlying OS without downtime? This wasn't really manageable, so I did such work in the early morning ;) How can I achieve a "simple" high-availability cluster now? I thought of DRBD with Heartbeat across 2 VMs, and for the storage an RDM (raw device mapped) LUN, switching the read/write permissions between the two VMs. Is this a good idea? Does anyone have a better solution?
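    For reference, the DRBD part of that idea amounts to a block device replicated between the two VMs, with Heartbeat moving the primary role. A minimal sketch, assuming a resource named r0 is already defined in /etc/drbd.conf and that the command syntax matches your DRBD version (older releases spell the force flag differently):

        # On both nodes: write DRBD metadata and attach the resource
        drbdadm create-md r0
        drbdadm up r0
        # On the node chosen as initial primary: start the first full sync
        drbdadm primary --force r0
        # /dev/drbd0 then carries the web content; Heartbeat fails the
        # primary role and the filesystem mount over to the peer VM.

    With DRBD replicating between the two VM disks, it is worth asking whether the shared RDM LUN adds anything, since you would then have two redundancy layers doing overlapping work.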


  • How to forward UDP Wake-on-Lan port to broadcast IP with IPTABLES?

    - by Nazgulled
    I'm trying to set up Wake-on-LAN for some of the LAN computers at home, and it seems that I need to open a UDP port (7 or 9 being the most common) and forward all requests to the broadcast IP, which in my case is 192.168.1.255. The problem is that my router does not allow me to forward anything to the broadcast IP. I can connect to my router through telnet, and it seems this router uses iptables, but I don't know much about it or how to use it. Can someone help me out with the proper iptables commands to do what I want? Also, in case it doesn't work, the commands to put everything back would be nice too. One last thing: will rebooting the router keep those manually added iptables entries, or would I need to run them again every time?
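    A common workaround, since Linux will not DNAT traffic straight to a broadcast address, is to forward the packets to an unused LAN address that is pinned to the Ethernet broadcast MAC. This is a sketch under several assumptions: the WAN interface is eth0, the LAN interface is br0, 192.168.1.254 is unused, and the router's kernel accepts a broadcast MAC in its ARP table (some kernels refuse this):

        # Send incoming WOL packets (UDP port 9) to a dummy LAN address
        iptables -t nat -A PREROUTING -i eth0 -p udp --dport 9 \
            -j DNAT --to-destination 192.168.1.254
        # Pin the dummy address to the broadcast MAC so the frame floods the LAN
        arp -i br0 -s 192.168.1.254 ff:ff:ff:ff:ff:ff

        # To put everything back, delete the same rule and ARP entry
        iptables -t nat -D PREROUTING -i eth0 -p udp --dport 9 \
            -j DNAT --to-destination 192.168.1.254
        arp -i br0 -d 192.168.1.254

    As for the reboot question: rules added with the iptables command live only in kernel memory, so they are lost at reboot unless the router's firmware saves and restores them itself.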


  • How can I move along an angled collision at a constant speed?

    - by Raven Dreamer
    I have, for all intents and purposes, a Triangle class that objects in my scene can collide with (in actuality, the right side of a parallelogram). My collision detection and resolution code works fine for the purpose of preventing a game object from entering the space of the Triangle, instead directing the movement along the edge. The trouble is, the maximum speeds along the x and y axes are not equal in my game: moving a given distance along the Y axis (up or down) should take twice as long as the same distance along the X axis (left or right). Unfortunately, these speeds apply to the collision resolution too, and movement along the blue path progresses twice as fast. What can I do in my collision resolution to make sure that the speed limit for Y-axis movement is obeyed in the latter case? The collision resolution for this case is below (vecInput and velocity are the position and velocity vectors of the game object):

        // y = mx + c
        lowY = 2 * vecInput.x + parag.rightYIntercept;
        ...
        else {
            // y = mx + c
            // vecInput.y = 2(x) + RightYIntercept
            // (vecInput.y - RightYIntercept) / 2 = x
            // If velocity.y (positive) is greater than velocity.x (negative),
            // we're pushing from the bottom, so push right.
            if (velocity.y > -1 * velocity.x) {
                vecInput = new Vector2((vecInput.y - parag.rightYIntercept) / 2, vecInput.y);
                Debug.Log("adjusted rightwards");
            } else {
                vecInput = new Vector2(vecInput.x, lowY);
                Debug.Log("adjusted downwards");
            }
        }
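    One way to keep the axis limits intact is to resolve the collision on the velocity rather than the position: project the velocity onto the edge so the object slides, then clamp the Y component. A sketch of the math (an alternative approach, not the post's own code), where $\hat{\mathbf{d}}$ is the unit direction of the blocking edge and $v_{y,\max}$ is the Y speed limit:

        \[
        \mathbf{v}' = (\mathbf{v} \cdot \hat{\mathbf{d}})\,\hat{\mathbf{d}}, \qquad
        \mathbf{v}'' = \mathbf{v}' \, \min\!\left(1, \frac{v_{y,\max}}{\lvert v'_y \rvert}\right)
        \]

    The projection keeps the motion along the edge, and the scale factor shrinks the whole slide vector whenever its Y component would exceed the limit, so the direction along the edge is preserved while the Y speed cap is honored.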


  • How to deal with colleagues who refuse to follow practices?

    - by Adrian Shum
    I was discussing with a colleague what we should use when one DB entity refers to another. I don't think there is any good reason to break the practice of putting the primary key in the referring entity. However, my colleague says: "You should use a surrogate key in the entity, but it is better to put the human-readable natural key in the referring entity. As long as it is unique, it is fine, and it is easier when you are doing support or maintenance work." I know it will work, but obviously it is not good practice to use a non-PK unique column as a "foreign key" just to gain a bit of ease in writing SQL during support, because we can get away with fewer table joins. Though I pointed out that his approach is conceptually incorrect and causes practical problems too, he seems willing to trade off correctness in the data model in exchange for ease of maintenance. He said: "I know it is not good practice, but good practice is not a golden rule." Honestly, I feel frustrated when dealing with something like this. I know there are always cases where we should break some rule or practice, but this is doubtless not such a case. What would you do when facing a situation like this? Please assume you are a senior developer who is expected to contribute to overall development direction and conventions.


  • Windows 7 CRC Error When Installing Fallout 3 [closed]

    - by c00lryguy
    Earlier today, I installed Fallout 3 on Windows XP perfectly fine. Then about 2 hours ago I installed Windows 7, and now I would like to install Fallout 3 again. But when I try to install it on Win 7, I get an error in the middle of the install:

        CRC Error: The file C:\Program Files\Bethesda Softworks\Fallout 3\Data\Video\B03.bik doesn't match the file in the setup's .cab file

    I forget the exact filename, but it is the same each time I install. The disc literally went from the DVD-ROM drive to the case after the first install and straight from the case back to the drive; it's in perfect condition. The drive itself is only about 2 months old, and I've never had any problems with it. I don't understand what's going on. The user I'm installing the game with is set as Administrator, as well.


  • Catch Me If You Can

    - by Knut Vatsendvik
    Suppose you have a proxy-based web service using Oracle Service Bus. In a stage in the request pipeline, you are using a Publish action to publish the incoming message to a JMS queue via a Business Service. What if the outbound transport provider throws an exception (outside of your pipeline)? Is your pipeline able to catch the error with an error handler? This situation could occur because of a faulty connection, a suspended queue, or some other reason. Here is the request pipeline in our simple test case, with an error handler added to the message flow containing a simple Log action. By default, the Publish action invokes the service in a fire-and-forget fashion; therefore any exception that occurs in the outbound transport goes unnoticed, as shown in the following invocation trace. So what now? In a message flow, you can apply a Routing Options action to modify any or all of the following properties in the outbound request: URI, Quality of Service, Mode, Retry parameters, and Message Priority. Now add the Routing Options action to the Request Action as shown below. Click the Routing Options to display its properties in the Properties view, select the QoS option to set the Quality of Service element, select Exactly Once to override the default setting, and republish the project. The invocation will now block until the message is completely processed. Running the same test case as earlier generates an invocation trace showing that the error handler is now triggered.


  • How to execute a shell script on startup?

    - by vijay.shad
    I have created a script to start a server (my first question). Now I want it to run at system boot and start the defined server. What should I do to get this done? My findings tell me to put this file in /etc/init.d and it will execute when the system boots. But I am not able to understand how the first argument on startup comes to be start. Is it predefined somewhere that $1 is start? And if I want a startall case that starts all the servers in the script, what are my options? My script is like this:

        #!/bin/bash
        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            restart)
                $0 stop
                $0 start
                ;;
            *)
                echo "usage: $0 (start|stop|restart)"
                ;;
        esac
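    To answer the $1 question: nothing predefines start; the init system simply invokes the script with start or stop as its first argument (e.g. /etc/init.d/myserver start) via the S/K symlinks in the runlevel directories. A hedged sketch of wiring this up, assuming the script is saved as /etc/init.d/myserver on a Debian-style system (Red Hat-style systems use chkconfig instead of update-rc.d):

        # Make the script executable and create the runlevel symlinks;
        # at boot those symlinks run '/etc/init.d/myserver start'.
        chmod +x /etc/init.d/myserver
        sudo update-rc.d myserver defaults

    A startall case would just be one more branch in the case statement, looping over the servers (server names below are placeholders, and start would need to accept a parameter):

        startall)
            for s in server1 server2; do
                start "$s"
            done
            ;;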


  • How do web hosting companies host end users' domains and give out so many public IPs?

    - by Registered User
    I am a computer science guy who understands networking very well, but when it comes to web hosting companies I am clueless. I want to know how web hosting companies give so many public IPs to so many users, each of whom also has root login. How is this done technically? That is what I am interested in, because I do not know how you would configure it. In my case, if I had to do it, I would buy a public IP from someone, connect my server to it, and at most give some people SSH access to it. How is it done in the case of web hosting companies?


  • SQL SERVER – ORDER BY ColumnName vs ORDER BY ColumnNumber

    - by pinaldave
    I strongly favor ORDER BY ColumnName. I read a blog post where the blogger compared the performance of the two SELECT statements and came to the conclusion that there is no harm in using ColumnNumber. Let us first address the claim that there is no performance difference. Run the following two scripts together:

        USE AdventureWorks
        GO
        -- ColumnName (Recommended)
        SELECT *
        FROM HumanResources.Department
        ORDER BY GroupName, Name
        GO
        -- ColumnNumber (Strongly Not Recommended)
        SELECT *
        FROM HumanResources.Department
        ORDER BY 3, 2
        GO

    If you look at the results and the execution plans, you will see that both queries take the same amount of time. However, that was not the point of this blog post, and it is not good enough to stop there. We need to understand the advantages and disadvantages of both methods.

    Case 1: When not using * and the columns are re-ordered

        USE AdventureWorks
        GO
        -- ColumnName (Recommended)
        SELECT GroupName, Name, ModifiedDate, DepartmentID
        FROM HumanResources.Department
        ORDER BY GroupName, Name
        GO
        -- ColumnNumber (Strongly Not Recommended)
        SELECT GroupName, Name, ModifiedDate, DepartmentID
        FROM HumanResources.Department
        ORDER BY 3, 2
        GO

    Case 2: When someone changes the schema of the table, affecting column order. I will let you recreate the example for this one yourself. If the schema on your development server is different from the production server and you use ColumnNumber, you will get different results on the production server.

    Summary: When you develop the query it may not be an issue, but as time passes, new columns are added to the SELECT statement or the original table is re-ordered, and if you have used ColumnNumber your query may start giving unexpected results and an incorrect ORDER BY. Note that the choice between ORDER BY ColumnName and ORDER BY ColumnNumber should not be made based on performance but on usability and scalability. It is always recommended to use a proper ORDER BY clause with ColumnName to avoid any confusion. Reference: Pinal Dave (http://blog.sqlauthority.com)


  • What's so difficult about SVN merges? [closed]

    - by Mason Wheeler
    Possible Duplicate: I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS? Every once in a while, you hear someone saying that distributed version control (Git, Hg) is inherently better than centralized version control (like SVN) because merging is difficult and painful in SVN. The thing is, I've never had any trouble with merging in SVN, and since you only ever hear that claim made by DVCS advocates, and not by actual SVN users, it tends to remind me of those obnoxious commercials on TV where they try to sell you something you don't need by having bumbling actors pretend that the thing you already have, and that works just fine, is incredibly difficult to use. And the use case that's invariably brought up is re-merging a branch, which again reminds me of those strawman product advertisements: if you know what you're doing, you shouldn't (and shouldn't ever have to) re-merge a branch in the first place. (Of course it's difficult to do when you're doing something fundamentally wrong and silly!) So, discounting the ridiculous strawman use case, what is there in SVN merging that is inherently more difficult than merging in a DVCS system?
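    For concreteness, the routine branch workflow under discussion looks like this in SVN 1.5+; a sketch assuming a standard trunk/branches layout and the caret URL shorthand from SVN 1.6 (feature-x is a placeholder branch name):

        # In a working copy of the branch: keep it current with trunk
        svn merge ^/trunk
        svn commit -m "sync branch with trunk"

        # In a working copy of trunk: bring the finished branch back once
        svn merge --reintegrate ^/branches/feature-x
        svn commit -m "merge feature-x into trunk"

    Used this way (reintegrate once, then retire the branch), the workflow matches the author's point that a branch never needs to be re-merged.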


  • How can I move a polygon edge 1 unit away from the center?

    - by Stephen
    Let's say I have a polygon class that is represented by a list of vector classes as vertices, like so:

        var Vector = function(x, y) {
                this.x = x;
                this.y = y;
            },
            Polygon = function(vectors) {
                this.vertices = vectors;
            };

    Now I make a polygon (in this case, a square) like so:

        var poly = new Polygon([
            new Vector(2, 2),
            new Vector(5, 2),
            new Vector(5, 5),
            new Vector(2, 5)
        ]);

    So, the top edge would be [poly.vertices[0], poly.vertices[1]]. I need to stretch this polygon by moving each edge away from the center of the polygon by one unit, along that edge's normal. The following example shows the first edge, the top, moved one unit up. The final polygon should look like this new one:

        var finalPoly = new Polygon([
            new Vector(1, 1),
            new Vector(6, 1),
            new Vector(6, 6),
            new Vector(1, 6)
        ]);

    It is important that I iterate, moving one edge at a time, because I will be doing some collision tests after moving each edge. Here is what I tried so far (simplified for clarity), which fails triumphantly:

        for (var i = 0; i < vertices.length; i++) {
            var a = vertices[i],
                b = vertices[i + 1] || vertices[0]; // in case of final vertex
            var ax = a.x, ay = a.y, bx = b.x, by = b.y;
            // get some new perpendicular vectors
            var a2 = new Vector(-ay, ax),
                b2 = new Vector(-by, bx);
            // make into unit vectors
            a2.convertToUnitVector();
            b2.convertToUnitVector();
            // add the new vectors to the original ones
            a.add(a2);
            b.add(b2);
            // the rest of the code, collision tests, etc.
        }

    This makes my polygon start slowly rotating and sliding to the left, instead of what I need. Finally, the example shows a square, but the polygons in question could be anything. They will always be convex, and always have their vertices in clockwise order.
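    The loop takes perpendiculars of the vertex position vectors a and b themselves, which point in a different direction at every vertex; that is where the rotation and drift come from. The offset should instead be the normal of the edge vector, applied equally to both endpoints. A sketch of the math, assuming clockwise winding in a y-down (screen) coordinate system:

        \[
        \mathbf{e} = \mathbf{b} - \mathbf{a}, \qquad
        \hat{\mathbf{n}} = \frac{(e_y,\, -e_x)}{\lVert \mathbf{e} \rVert}, \qquad
        \mathbf{a}' = \mathbf{a} + \hat{\mathbf{n}}, \quad
        \mathbf{b}' = \mathbf{b} + \hat{\mathbf{n}}
        \]

    Because both endpoints move by the same unit normal, the edge translates without rotating. To make the direction orientation-proof, test $\hat{\mathbf{n}} \cdot (\mathbf{m} - \mathbf{c})$, where $\mathbf{m}$ is the edge midpoint and $\mathbf{c}$ the polygon center, and negate $\hat{\mathbf{n}}$ when the dot product is negative, so every edge moves away from the center.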


  • Back up a single table in SQL Server

    - by BuckWoody
    SQL Server doesn't have an easy way to take a table backup, so I often use bcp (the Bulk Copy Program) to accomplish the same goal. I've mentioned this before, and someone told me that when they tried it they couldn't restore the table; ah, the dangers of telling people half the information! I should have mentioned that you need a "format file" ready if the table does not exist at the destination. In my case I already had the table; in this person's case they did not. The format file can be used to rebuild the table structure before the data is bcp'd in, and you can read more about it here: http://msdn.microsoft.com/en-us/library/ms191516.aspx There's another way to back up a table: create a filegroup, place the table there, and then take a filegroup backup. Of course, there are other methods of moving a single table's data in and out, including SQL Server Integration Services, the older Data Transformation Services, or simply using the SQLCMD or PowerShell utilities to run a query and save the output to a file. In fact, these days I'm using a PowerShell script to build INSERT statements from that query. That could also be modified to create the table structure (or modify one if needed) quite easily.
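    A minimal sketch of the bcp round trip described above, run from PowerShell on a machine with the SQL Server client tools (server and file names are placeholders; -T uses a trusted connection):

        # Export the table in native format
        bcp AdventureWorks.HumanResources.Department out dept.dat -n -T -S localhost
        # Generate a format file describing the table's structure
        bcp AdventureWorks.HumanResources.Department format nul -n -f dept.fmt -T -S localhost
        # Import on the destination; the format file maps the data, but the
        # table itself must already exist (bcp moves data, not DDL)
        bcp AdventureWorks.HumanResources.Department in dept.dat -n -f dept.fmt -T -S localhost

    Note the caveat from the post: since bcp does not create tables, the destination table has to be built first, by scripting it out or via the PowerShell approach mentioned above.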


  • Need an explanation of amortization in algorithm analysis

    - by Pradeep
    I am learning algorithm analysis and came across an analysis tool for understanding the running time of an algorithm with widely varying performance, which is called amortization. The author quotes: "An array with an upper bound of n elements, with a fixed bound N on its size. The clear operation takes O(n) time, since we should dereference all the elements in the array in order to really empty it." The above statement is clear and valid. Now consider the next passage: "Now consider a series of n operations on an initially empty array. If we take the worst-case viewpoint, the running time is O(n^2), since the worst case of a single clear operation in the series is O(n) and there may be as many as O(n) clear operations in the series." From the above statement, how is the time complexity O(n^2)? I did not understand the logic behind it. If n operations are performed, how is it O(n^2)? Please explain what the author is trying to convey.
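    The quoted bound is just the product of the two worst cases, written out:

        \[
        T(n) \;\le\; \underbrace{n}_{\text{operations in the series}} \times \underbrace{O(n)}_{\text{worst single operation (a clear)}} \;=\; O(n^2)
        \]

    That is all the author claims at this point: each of the n operations could be a clear, and a single clear can touch up to n elements. The purpose of amortization, which the text presumably develops next, is to show that this bound is loose: a clear only dereferences elements that earlier add operations placed in the array, so over the whole series the total work is O(n), i.e. O(1) amortized per operation.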


  • Permission based Authorization vs. Role based Authorization - Best Practices - 11g

    - by Prakash Yamuna
    In previous blog posts here and here I have alluded to OWSM's support for permission-based authorization and role-based authorization. Recently I was having a conversation with an internal team in Oracle looking to use OWSM for their web services security needs, and one of the topics was: when should you use permission-based authorization vs. role-based authorization? As in most scenarios, the answer is: it depends! There are trade-offs involved in the two approaches, and you need to understand which trade-offs are better for your scenario.

    Role-based authorization:
    - Simple to use. Just create a new custom OWSM policy and specify the role in the policy (using EM Fusion Middleware Control).
    - Inconsistent if you have multiple types of resources in an application (e.g. EJBs, web apps, web services): the model for securing EJBs with roles and the model for securing web app roles differ. Since the model is inconsistent, tooling is also fairly inconsistent.
    - Achieving this use case in JDeveloper is slightly complex, since JDeveloper does not directly support creating OWSM custom policies.

    Permission-based authorization:
    - More complex. You need to attach an OWSM policy and also create OPSS permission authorization policies. (Note: OWSM leverages the OPSS permission-based authorization support.)
    - More appropriate if you have multiple types of resources in an application (e.g. EJBs, web apps, web services) and want a consistent authorization model.
    - Consistent tooling for managing authorization across different resources (e.g. EM Fusion Middleware Control), and better lifecycle support in terms of T2P, etc.
    - Achieving this use case in JDeveloper is slightly complex, since JDeveloper does not directly support creating/editing OPSS permission-based authorization policies.


  • Variable naming conventions?

    - by Ziv
    I've just started using ReSharper (for C#) and I rather like its code-smell finder; it shows me things about my writing that I meant to fix a long time ago (mainly variable naming conventions), and it has caused me to reconsider some of my naming conventions for methods and instance variables. ReSharper suggests that instance variables be lower camelCase and begin with an underscore. For a while I have meant to make all my local variables lower camelCase, but is the underscore necessary? Do you find it comfortable? I don't like this convention, but I also haven't tried it yet; what is your opinion of it? The second thing it prompted me to re-evaluate is my naming convention for GUI event handlers. I usually use the VS standard of ControlName_Action, and my controls usually use Hungarian notation (as a suffix, to help clarify in code what is visible to the user and what isn't when dealing with similarly named variables), so I end up with OK_btn_Click(). What is your opinion of that? Should I succumb to the ReSharper convention, or are there other equally valid options?


  • Email notification and mail server

    - by Jerr Wu
    I am building a web application, to be hosted at http://www.linode.com/, with email notifications just like Facebook's. When user A comments on a post, the poster will get an email notification from '[email protected]' with the comment message written by user A. (Not spam.) I really like Google Apps, but they have a sending limit of 2,000 messages per day, which does not suit my case because I cannot have sending limits; there will be many email notifications. http://support.google.com/a/bin/answer.py?hl=en&answer=166852 I also need company email accounts for team members' use, for which I prefer Google Apps. Since my web application will be hosted on Linode, I am considering "Amazon Simple Notification Service" for the email notifications. My questions are: Can you recommend any other email service provider that suits my case? And can I bind company email accounts (ex: [email protected]) with Google Apps while binding [email protected] to a different email service provider?


  • Why does bash invocation differ on AIX when using telnet vs ssh

    - by Philbert
    I am using an AIX 5.3 server with a .bashrc file set up to echo "Executing bashrc". When I log in to the server using ssh and run bash -c ls, I get:

        Executing bashrc
        .
        ..
        etc....

    However, when I log in with telnet as the same user and run the same command, I get:

        .
        ..
        etc....

    Clearly, in the telnet case the .bashrc was not invoked. As near as I can tell, this is the correct behaviour, given that the shell is non-interactive in both cases (it is invoked with -c). However, the ssh case seems to be invoking the shell as interactive. It does not appear to be invoking the .profile, so it is not creating a login shell. I cannot see anything obviously different between the environments in the two cases. What could be causing the difference in bash behaviour?
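    One plausible explanation, offered as an assumption to verify rather than a confirmed cause: many bash builds are compiled with SSH_SOURCE_BASHRC, which makes even a non-interactive bash source ~/.bashrc when it believes it was started by a remote daemon. Bash detects this via the SSH_CLIENT / SSH2_CLIENT environment variables, which sshd sets and telnetd does not. A quick test of the hypothesis (if AIX's env lacks -u, unset the variables in an intermediate shell instead):

        # If the "Executing bashrc" line disappears once the SSH variables
        # are hidden from the inner bash, this mechanism is the culprit.
        ssh user@aixhost 'env -u SSH_CLIENT -u SSH2_CLIENT bash -c ls'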


  • Big project layout: adding a new feature across multiple sub-projects

    - by Shiplu
    I want to know how to manage a big project with many components under a version control management system. In my current project there are 4 major parts: web, server, admin console, and platform. The web and server parts use 2 libraries that I wrote. In total there are 5 git repositories and 1 mercurial repository. The project build script is in the Platform repository and automates the whole building process. The problem is that when I add a new feature that affects multiple components, I have to create a branch for each of the affected repos, implement the feature, and merge it back. My gut feeling is "something is wrong". So should I create a single repo and put all the components there? I think branching would be easier in that case. Or should I just keep doing what I am doing now? In that case, how do I solve this problem of creating a branch on each repository?
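    One middle ground, named plainly since the question doesn't mention it: keep the components as separate repositories but bind them with git submodules under a top-level repo, so a cross-cutting feature ends as a single commit that pins matching component versions (URLs and names below are placeholders):

        # In a hypothetical top-level 'product' repo
        git submodule add git@example.com:team/web.git web
        git submodule add git@example.com:team/server.git server
        # After branching/committing inside the affected submodules,
        # one commit here records the coherent set of component versions
        git add web server
        git commit -m "Feature X: pin web and server at their new revisions"

    The mercurial repository would still sit outside this arrangement, which is one argument for consolidating on a single VCS first.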


  • Transient VO : Powerful J2EE Design Pattern

    - by Vijay Mohan
    We had a use case wherein communication has to happen between regions residing under different task flows; essentially, they had a common set of parameters to be used. Initially we resorted to the use of pageFlowScope variables, but those are tightly coupled with the individual task flows. So how should the communication happen? Some of the alternatives we brainstormed are:

    1. Use of ADF contextual events. This is indeed a powerful feature for such use cases, but there is a considerable cost involved, so before resorting to it you have to make sure you have a good enough reason to use it. It actually does a server round trip, and the issuing of an event and the listening part also require your attention!

    2. Use a transient VO with shared data control scope. With shared data control scope, the transient VO rows are persistent across the task flows in your application. All you have to do is create the attributes in the transient VO (preferably with the same names, for ease of conversion) and create some utility methods in the VOImpl for creating, updating, and deleting a row. You also have to make sure that the VO row is initialized once per HTTP request (you can do this in a bookmark method of your index.jspx, residing in adfc-config.xml); otherwise the UI fields bound to the transient VO attributes won't render in the UI.

    Hope this helps; this should be a common use case across apps.


  • Are More Comments Better in High-Turnover Environments?

    - by joshin4colours
    I was talking with a colleague today. We work on code for two different projects. In my case, I'm the only person working on my code; in her case, multiple people work on the same codebase, including co-op students who come and go fairly regularly (every 8-12 months). She said that she is liberal with her comments, putting them all over the place. Her reasoning is that comments help her remember where things are and what things do, since much of the code wasn't written by her and could be changed by someone other than her. Meanwhile, I try to minimize the comments in my code, putting them only in places with an unobvious workaround or bug. However, I have a better understanding of my code overall, and more direct control over it. My opinion is that comments should be minimal and the code should tell most of the story, but her reasoning makes sense too. Are there any flaws in her reasoning? Liberal commenting may clutter the code, but it ultimately could be quite helpful when many people work on it in the short to medium run.


  • OWB/ODI Users: Last Chance to Submit and Vote On Sessions for OpenWorld 2010

    - by antonio romero
    Now is the last chance for OWB and ODI users to propose new ETL/DW/DI sessions for OpenWorld! Oracle OpenWorld 2010 "Suggest a Session" lets members of the Oracle Mix community submit and vote on papers/talks for OpenWorld, and the most popular session proposals will be included in the conference program. One promising OWB-related topic has already been submitted: Case Study: Real-Time Data Warehousing and Fraud Detection with Oracle 11gR2. Dr. Holger Friedrich and consultants from sumIT AG in Switzerland built a real-time data warehouse and accompanying BI system for real-time online fraud detection with very limited resources and a short schedule. His presentation will cover:

    - How sumIT AG efficiently loads complex data feeds in real time into Oracle 11gR2 using, among others, Advanced Queues and XML DB
    - How they lowered costs and sped up development by leveraging the DB's development features, including Oracle Warehouse Builder
    - How they delivered a production-ready solution in a few short months using only three part-time developers

    Come vote for this proposal on Oracle Mix: https://mix.oracle.com/oow10/proposals/10566-case-study-real-time-data-warehousing-and-fraud-detection-with-oracle-11gr2 I have already invited members of the OWB/ODI LinkedIn group (with over 1400 members) to come vote on topics like this one and propose their own. If enough of us vote on a few topics, we are sure to get some on the agenda! You can also propose your own topics using the Suggest-a-Session instructions here: http://wiki.oracle.com/page/Oracle+OpenWorld+2010+Suggest-a-Session If you propose a topic, don't forget to come to LinkedIn and promote it! I have already sent the members of the LinkedIn group an email announcement about this, and I will send another in a week, with links to all topics submitted. Thanks, all!


  • If-Modified-Since vs If-None-Match

    - by Roger
    This question is based on this article.

    Response header:

        HTTP/1.1 200 OK
        Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
        ETag: "10c24bc-4ab-457e1c1f"
        Content-Length: 12195

    Request header:

        GET /i/yahoo.gif HTTP/1.1
        Host: us.yimg.com
        If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
        If-None-Match: "10c24bc-4ab-457e1c1f"

        HTTP/1.1 304 Not Modified

    In this case the browser is sending both If-None-Match and If-Modified-Since. My question is: on the server side, do I need to match BOTH the ETag and If-Modified-Since before I send a 304? Or should I just look at the ETag and send 304 if the ETag is a match, ignoring If-Modified-Since in that case?
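    For what it's worth, the HTTP spec answers this: the later revision, RFC 7232 (Section 3.3), states that a recipient MUST ignore If-Modified-Since when the request also contains If-None-Match, so matching the ETag alone is sufficient. A quick way to exercise a server's conditional handling, reusing the ETag from the example above:

        # Expect 304 if the server honours If-None-Match by itself
        curl -s -o /dev/null -w '%{http_code}\n' \
            -H 'If-None-Match: "10c24bc-4ab-457e1c1f"' \
            http://us.yimg.com/i/yahoo.gif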


  • Installing Windows 8 over Windows 7 with Ubuntu installed using wubi (both on `C:\`)

    - by peat-ar
    Current state: I'm using both Ubuntu (installed via Wubi on the same drive as Windows) and Windows 7 quite frequently. I just bought the upgrade to Windows 8 and am curious to try it out; however, I'm quite unsure whether Windows 8's "Secure Boot" will shut out my current Ubuntu installation, and whether it's even possible to keep it. So... is there any way to upgrade to Windows 8 without overwriting Ubuntu? I really don't want to reinstall it, as a lot of customization has been done, and taking backups and so on would get pretty wearing (the same goes for Windows 7: if possible, I'd like to keep my files). This is not a duplicate of "Installing Windows 8 over Windows 7 with Ubuntu installed using wubi?" because that question only deals with the case where Ubuntu has been installed on (e.g.) D:\ while Windows is installed on C:\.


  • optimizing file share performance on Win2k8?

    - by Kirk Marple
    We have a case where we're accessing a RAID array (drive E:) on a Windows Server 2008 SP2 x86 box (recently installed; nothing other than SQL Server 2005 on the server). In one scenario, when accessing it directly (E:\folder\file.xxx), we get 45 MBps throughput to a video file. If we access the same file on the same array, but through a UNC path (\\server\folder\file.xxx), we get about 23 MBps throughput with the exact same test. Obviously the second test goes through more layers of the stack, but that's a major performance hit. What tuning should we be looking at to bring the UNC path closer in performance to the direct-access case? Thanks, Kirk (Corrected: it is CIFS, not SMB, but I generalized the title to 'file share'.) (Additional info: this happens during the read of a single file, not across multiple connections. The file is on the local machine but exposed via the file share, so the client and the file server are the same Windows 2008 server.)

