Search Results

Search found 16768 results on 671 pages for 'folder compare'.


  • Tween Animation Cannot Start

    - by David Dimalanta
    Can you see any reason why my tween code doesn't run or work? I already added the tween engine to the library folder under the LibGDX project folder and checked "Order and Export" for it under Java Build Path in the Properties menu. My first two classes run correctly, but my third class doesn't. Here's the sequence: The first class is the first screen; the fade animation works on the company's logo. The second class is the second screen; the fade animation for the loading screen works. The third class is the third screen, which is called after the second screen. Here the animation stops or won't run; I want the black screen to fade out at the start, when the menu appears. Can you check whether I did this right? See the comment lines for explanations.

      //-----[ Animation Setup ]-----
      Tween.registerAccessor(Sprite.class, new Tween_Animation()); // --> Tween_Animation.java
      Tween_Manager = new TweenManager(); // --> I initialized the TweenManager and it seems okay.
      cb_start = new TweenCallback() // --> I'll use this when I choose START and the menu will fade in black.
      {
          @Override
          public void onEvent(int arg0, BaseTween<?> arg1)
          {
              goTo();
          }
      };
      Tween // --> This is where I think the problem is.
          .to(black_Sprite, Tween_Animation.ALPHA, 3f)
          .target(1)
          .ease(TweenEquations.easeInQuad)
          .repeatYoyo(200, 2.5f) // --> I set the repeat to 200 times when I noticed that the animation wouldn't work!
          .start(Tween_Manager);

    Read the article

  • Unlock file on Windows Server 2003 without rebooting

    - by BalusC
    We have several Windows Server 2003 machines running, each with its own purpose. There are scheduled jobs which synchronize some files over SFTP using WinSCP. Occasionally a newly copied file is left locked in the "inbox" folder without any apparent reason. The machine's own background task (programmed in Java) can't move it to the "processed" folder anymore after processing it. Manually moving it only yields the well-known error message Cannot move [filename]: it is being used by another person or program. The only resort is to reboot the machine, but we would of course like to avoid that. Any suggestions? I tried Unlocker, which works fine locally on WinXP, but it doesn't work on those Win2K3 machines over Remote Desktop (the unlock option doesn't show up in the right-click context menu).

    Read the article

  • To ORM or Not to ORM. That is the question…

    - by Patrick Liekhus
    UPDATE: Thanks for the feedback and comments. I have adjusted my table below with your recommendations; I had missed a point or two. I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and the reasoning behind why I chose to use these tools. Let’s start by defining the term ORM, or Object-Relational Mapping.  According to Wikipedia it is defined as the following: Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language. Why should you care?  Basically it allows you to map your business objects in code to the persistence layer behind them. And better yet, why would you want to do this?  Let me outline it in the following points: Development speed.  No more repetitive mapping of query results to object members.  Once the map is created the code is rendered for you. Persistence portability.  The ORM knows how to map SQL-specific syntax for the persistence engine you choose.  It does not matter if it is SQL Server, Oracle or another database of your choosing. Standard/boilerplate code is simplified.  The basic CRUD operations are consistent and use database metadata for basic operations. So how does this help?  Well, let’s compare some of the ORM tools that I have used and/or researched.  I have been interested in ORM for some time now.  My ORM of choice for a long time was NHibernate and I still believe it has a strong case in some business situations.  However, you have to take business considerations into account and the law of diminishing returns.  Because of these two factors, my recent activity and experience has been around DevExpress eXpress Persistent Objects (XPO).  The primary reason for this is because they have the DevExpress eXpress Application Framework (XAF) that sits on top of XPO.  With this added value, the data model can be created (either database first or code first) and the Web and Windows clients can be created from these maps.  While out of the box they provide some simple list and detail screens, you can very easily extend and modify these to your liking.  DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it.  This sounds worse than it really is.  What I mean by this is that if you choose to follow the DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while also not worrying about the basics. I have put together a list of the top features that I have used to compare the limited list of ORMs that I have exposure to.  Again, the biggest selling point in my opinion is that XPO is just as solid as any of the other ORMs, but with the added layer of XAF they become unstoppable.  And then couple that with the EDMX modeling tools and code generation, and it becomes a no-brainer.
  Designer Features                      | Entity Framework | NHibernate | Fluent w/ NHibernate | Telerik OpenAccess | DevExpress XPO | DevExpress XPO/XAF plus Liekhus Tools
  Uses XML to map relationships          | -                | Yes        | -                    | -                  | -              |
  Visual class designer interface        | Yes              | -          | -                    | -                  | -              | Yes
  Management integrated w/ Visual Studio | Yes              | -          | -                    | Yes                | -              | Yes
  Supports schema first approach         | Yes              | -          | -                    | Yes                | -              | Yes
  Supports model first approach          | Yes              | -          | -                    | Yes                | Yes            | Yes
  Supports code first approach           | Yes              | Yes        | Yes                  | Yes                | Yes            | Yes
  Attribute driven coding style          | Yes              | -          | Yes                  | -                  | Yes            | Yes

I have a very small team and limited resources with a lot of responsibilities.  In order to keep up with our customers, we must rely on tools like these.  We use the EDMX tool so that we can create a visual representation of the applications with our customers.  Second, we rely on the code generation so that we can focus on the business problems at hand and not on whether a field is mapped correctly.  This keeps us from requiring as many junior-level developers on our team.  I have also worked on multiple teams where they believed in writing their own “framework”.  In my experience and opinion this is not the route to take unless you have a team dedicated to supporting just the framework.  Each time that I have worked on custom frameworks, the framework eventually becomes old, outdated and full of “performance” enhancements specific to one or two requirements.  With an ORM, there are a lot of people smarter than me working on the bigger issues of persistence and performance.  Again, my recommendation would be to use an available framework and get to working on your business domain problems.  If your code is not making money for you, why are you working on it?  Do you really need to be writing query-to-object-member code again and again? Thanks
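To make the “code first” and “attribute driven coding style” rows above concrete, here is a minimal sketch of what such a mapping can look like. It assumes Entity Framework (one of the ORMs in the table) and a hypothetical Customer/Order model; it is an illustration of the idea, not code taken from any of the products compared.

      using System;
      using System.Collections.Generic;
      using System.ComponentModel.DataAnnotations;
      using System.Data.Entity;

      // A plain class plus a few attributes is all the ORM needs to build its map.
      public class Customer
      {
          [Key]
          public int Id { get; set; }

          [Required, StringLength(100)]
          public string Name { get; set; }

          public virtual ICollection<Order> Orders { get; set; }
      }

      public class Order
      {
          [Key]
          public int Id { get; set; }

          public DateTime Entered { get; set; }

          public decimal Total { get; set; }

          public virtual Customer Customer { get; set; }
      }

      // The context exposes each mapped class; basic CRUD comes for free.
      public class StoreContext : DbContext
      {
          public DbSet<Customer> Customers { get; set; }
          public DbSet<Order> Orders { get; set; }
      }

      class Demo
      {
          static void Main()
          {
              using (var db = new StoreContext())
              {
                  db.Customers.Add(new Customer { Name = "Acme" });
                  db.SaveChanges(); // no hand-written INSERT or result-to-object mapping required
              }
          }
      }

The other tools in the table have their own mapping styles (XML, fluent configuration, or attributes), but the net effect is the same: the query-to-object-member plumbing is generated rather than written by hand.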

    Read the article

  • Chroot with CentOS 5.3 + openssh 4.3p2

    - by Scud
    OS: CentOS 5.3, with OpenSSH 4.3p2. I'm trying to set up a chroot for the SSH shell, but OpenSSH versions prior to 4.8 don't accept the settings below. yum update openssh only brings it up to version 4.3, which is quite old. Doesn't CentOS support OpenSSH 4.8 or later? If not, how do I set up a chroot with OpenSSH 4.3? Or is it better to just use FTP? My purpose is to limit SFTP or FTP access to a certain folder, not the root folder. Thanks!

      Match group sftponly
          ChrootDirectory /home/%u
          X11Forwarding no
          AllowTcpForwarding no
          ForceCommand internal-sftp

    Read the article

  • mono: application not running when web.config in subfolder

    - by Quandary
    I run ASP.NET on Mono, and I wanted to run BlogEngine: http://www.dotnetblogengine.net/ I downloaded it, put it in /var/www/blogengine, went to /var/www/blogengine on the console and started xsp2. I went to http://localhost:8080, and it ran without problems. Then I stopped xsp2, went to /var/www and started xsp2. I went to http://localhost:8080/blogengine and it ended with a strange error: The section <blogProvider> can't be defined in this configuration file (the allowed definition context is 'MachineToApplication'). (/var/www/blogengine/Web.Config line 9) The problem seems to be that it stops working as soon as the xsp2 root folder is anything other than the application root folder... Do I have to configure anything? Or what else is wrong?

    Read the article

  • Error "403 Forbidden" on Sharepoint Search Settings Page

    - by user21924
    Hello, I thought I had solved this nightmare by re-entering the values in my SSP properties setup; however, the error has reared its ugly head again when accessing the Search Settings page. All solutions point to the method listed here: * http://www.routtlogics.com/blog/Lists/Posts/Post.aspx?ID=6 * http://social.technet.microsoft.com/Forums/en-US/sharepointadmin/thread/f00651cd-e452-45b9-b19e-90e89c3c3ad4 * http://blogs.technet.com/sushrao/archive/2009/03/26/microsoft-office-sharepoint-server-2007-moss-403-forbidden-error-when-clicked-on-search-settings-page.aspx The above workarounds basically state that granting the local group WSS_WPG read and write permission to the Tasks folder in the Windows directory would solve the problem; however, whenever I try to change the permission attributes of this folder I get an access denied message, even when logged in as a Domain administrator, an Enterprise administrator and even the SharePoint Farm administrator. How do I get around this access denied issue? Thanks

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point  or even come to a conclusion which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services  (couldn't get AJAX/JSON to run on IIS8 ) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity more than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simple does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression:<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page:Hello World @DateTime.Now WCF REST WCF REST was the token REST implementation for ASP.NET before WebAPI and the inbetween step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } } WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client. ASP.NET AJAX (ASMX Services) I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012  removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box). ASP.NET MVC Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller: public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } } ASP.NET WebAPI Next up is WebAPI which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response: public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } } Testing Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line  looks like this: ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this:   It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of varience. You might also want to do an IISRESET to clear the Web Server. 
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test with: 100,000 iterations; a load factor of 20 concurrent connections; an IISRESET before starting; and a short warm-up run for API and MVC to make sure startup cost is mitigated. Here is the batch file I used for the test:

      IISRESET
      REM make sure you add
      REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
      REM to your path so ab.exe can be found
      REM Warm up
      ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
      ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
      ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

      ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
      ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
      ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
      ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
      ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
      ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests, but the values used here are the medians. Part of this has to do with the fact that I ran the tests on my local machine - results would probably be more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop, which is a Dell XPS with a quad-core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be… Results Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All it really does is give you an idea of the raw throughput for the technology from the time of the request to reaching the endpoint and returning minimal text data back to the client, which indicates full round-trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served. Another test - JSON Data Service Results The second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since that doesn't make a ton of sense to return data from the them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this: public class Person { public Person() { Id = 10; Name = "Rick"; Entered = DateTime.Now; } public int Id { get; set; } public string Name { get; set; } public DateTime Entered { get; set; } } Here are the updated handler classes that use Person: Handler public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { var action = context.Request.QueryString["action"]; if (action == "json") JsonRequest(context); else TextRequest(context); } public void TextRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public void JsonRequest(HttpContext context) { var json = JsonConvert.SerializeObject(new Person(), Formatting.None); context.Response.ContentType = "application/json"; context.Response.Write(json); } public bool IsReusable { get { return true; } } } This code adds a little logic to check for a action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response. WCF REST   [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } [OperationContract] [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)] public Person HelloWorldJson() { // Add your operation implementation here return new Person(); } } For WCF REST all I have to do is add a method with the Person result type.   ASP.NET MVC public class MvcPerformanceController : Controller { // // GET: /MvcPerformance/ public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } public JsonResult HelloWorldJson() { return Json(new Person(), JsonRequestBehavior.AllowGet); } } For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer. ASP.NET WebAPI public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. 
Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } [HttpGet] public Person HelloWorldJson() { return new Person(); } [HttpGet] public HttpResponseMessage HelloWorldJson2() { var response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter); return response; } } Testing and Results To run these data requests I used the following ab.exe commands:

      REM JSON RESPONSES
      ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt
      ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt
      ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt
      ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt

The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results: The performance for each technology drops a little bit, except for WebAPI which is up quite a bit! From this test it appears that WebAPI actually performs significantly better returning a JSON response rather than a plain string response. Snag with Apache Benchmark and 'Length Failures' I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows, performance improved significantly with JSON results, from 5580 to 6530 or so, which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached, running the ab.exe test by using the -X switch: ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson which showed that indeed all requests were returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in the dynamic output. For example, these two results: {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"} {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"} are different in length because of the fractional seconds, which results in 68 and 69 bytes respectively. The same URL produces different result lengths, which is what ab.exe reports. I didn't notice it at first, but the same is happening when running the ASHX handler with the JSON.NET result, since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark, and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though. Another interesting Side Note: Perf drops over Time As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabilize somewhere around 5900 req/sec, occasionally jumping lower.
For these tests this is why I did the IIS RESET and warm-up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again… Summary As I stated at the outset, these were informal tests to satiate my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the highest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it's also about the suitability for the job and the ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Oftentimes old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great. To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my Tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code? This stresses another point: Don't take a single test as the final gospel and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediate that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff… Resources: Source Code on GitHub; Apache HTTP Server Project (ab.exe is part of the binary distribution). © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Web API.
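To see the length variation described under 'Length Failures' above in isolation, here is a minimal sketch; it reuses the article's Person class and the same Json.NET JsonConvert.SerializeObject call, and the byte counts it prints are only illustrative (they will differ from run to run):

      using System;
      using System.Text;
      using System.Threading;
      using Newtonsoft.Json;

      public class Person
      {
          public Person()
          {
              Id = 10;
              Name = "Rick";
              Entered = DateTime.Now;
          }
          public int Id { get; set; }
          public string Name { get; set; }
          public DateTime Entered { get; set; }
      }

      class LengthCheck
      {
          static void Main()
          {
              var first = JsonConvert.SerializeObject(new Person(), Formatting.None);
              Thread.Sleep(15); // let the clock tick so the fractional seconds change
              var second = JsonConvert.SerializeObject(new Person(), Formatting.None);

              // The same "request" can produce bodies of slightly different length,
              // which is all that ab.exe's length check is complaining about.
              Console.WriteLine(Encoding.UTF8.GetByteCount(first) + " bytes: " + first);
              Console.WriteLine(Encoding.UTF8.GetByteCount(second) + " bytes: " + second);
          }
      }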

    Read the article

  • mount_afp on linux, user rights

    - by Antonio Sesto
    I need to mount a remote filesystem on a Linux box using the AFP protocol. The Linux box runs an old Debian 4. I downloaded the source code of mount_afp, compiled it and installed it with all the required packages. Then I created /dev/fuse with the following command: mknod /dev/fuse c 10 229 (according to the instructions here). I can mount the remote filesystem as root by executing: mount_afp afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/ but the same command fails when run as a normal user (of the local machine). After reading here and there, I created a group fuse and added my normal user U to it: [prompt] groups U U fuse Then I modified the group of /dev/fuse, which now has the following rights: 0 crwxrwx--- 1 root fuse 10, 229 Feb 8 15:33 /dev/fuse However, if the user U tries to mount the remote filesystem by using the same command as above, U gets the following error: Incorrect permissions on /dev/fuse, mode of device is 20770, uid/gid is 0/1007. But your effective uid/gid is 1004/1004 But the user U with uid 1004 also has gid 1007 (group fuse). I think the problem is related to real/effective/etc. IDs, but I do not know how to proceed and could not find any clear instructions. Could you please help me? There is also another problem. If I mount /mnt/MOUNTPOINT as root and run ls -l /mnt, I get: drwxrwxrwx 15 root root 466 Feb 8 16:34 MOUNTPOINT If I run ls -l /mnt as normal user U I get: ? ?????????? ? ? ? ? ? MOUNTPOINT and in fact when I try to cd /mnt/MOUNTPOINT I get: $-> cd /mnt/MOUNTPOINT -sh: cd: /mnt/MOUNTPOINT: Not a directory Then I unmount /mnt/MOUNTPOINT as root, run ls -l /mnt again as normal user U, and get: 0 drwxr-xr-x 2 root root 6 Feb 8 15:32 MOUNTPOINT/ After reading Frank's answer, I killed every shell/process running with the privileges of user U. Still U cannot mount the remote filesystem, but the error message has changed. Now it is: "Login error: Authentication failed". The problem is not related to the remote login/password, since the same command works perfectly when run as root of the local machine. Since I cannot get mount_afp to work with normal users, I decided to follow mgorven's suggestion, so I ran the commands: mount_afp -o allow_other afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/ and mount_afp -o user=U afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/ The mount succeeds but user U cannot access the mount point. If U executes ls -l in /mnt: U@LOCAL_HOST [/mnt] $-> ls -l ls: cannot access MOUNT_POINT: Permission denied total 0 ? ?????????? ? ? ? ? ? MOUNT_POINT Is it so hard to get this utility working?

    Read the article

  • FontExplorer X + Dropbox

    - by ohnno
    I'm looking to sync my fonts between my MacBook and iMac using a combination of FontExplorer X and Dropbox. I've seen a few blog posts that confirm it is possible by using a symbolic link on the database folder, but for some reason I can't seem to find that folder (I'm thinking it may have been changed/moved in the newer releases of FEX?). It should be at this path: [Application Support/Linotype/Font Explorer/fontexplorer x.fexdb], but the Linotype folder no longer exists in Application Support. Any help with this would be greatly appreciated!

    Read the article

  • Improving the state of the art in API documentation sites

    - by Daniel Cazzulino
    Go straight to the site if you want: http://nudoq.org. You can then come back and continue reading :) Compare some of the most popular NuGet packages API documentation sites: Json.NET EntityFramework NLog Autofac You see the pattern? Huge navigation tree views, static content with no comments/community content, very hard (if not impossible) to search/filter, etc. These are the product of automated tools that have been developed years ago, in a time where CHM help files were common and even expected from libraries. Nowadays, most of the top packages in NuGet.org don’t even provide an online documentation site at all: it’s such a hassle for such a crappy user experience in the end! Good news is that it doesn’t have to be that way. Introducing NuDoq A lot has changed since those early days of .NET. We now have NuGet packages and the awesome channel that is ...Read full article

    Read the article

  • New Release Overview Part 1

    - by brian.harrison
    Ladies & Gentlemen, I have been getting a lot of questions over the last month or two about the next release of WCI, codenamed "Neo". Unfortunately I cannot give you an exact release date, which I know you would all be asking me for if we were talking face to face, but I can definitely provide you with information about some of the features that will be made available. So over the next few blog entries, I am going to provide you with details about two features and even provide you with screenshots for some of them. KD Browser Portlet This portlet will provide a Windows Explorer look and feel to the Knowledge Directory from within a Community Page or My Page. Not only will the portlet provide access to the folder structure and the documents within, but the user or community manager will also have the ability to modify what is being shown. From within a preferences page, the user or community manager can change what top-level folders are shown within the folder structure as well as what properties are available for each document that is shown. There are a number of other portlet-specific customizations available as well. Embedded Tagging Engine As some of you might be aware, there was a product made available just prior to the Oracle acquisition known as Pathways, which gave users the ability to add tags to documents that were either in the Knowledge Directory or in the Collaboration Documents section. Although this product is no longer available separately for customers to purchase, we definitely did feel that the functionality was important and interesting enough that other customers should have access to it. The decision was made for this release to embed the original Pathways product as the Tagging Engine for WCI and Collaboration. This tagging engine will allow a user to add tags to a document in the Knowledge Directory as well as through the Collaboration Documents section. Once the tags are added to the Tagging Engine and associated with documents, a user will have the ability to filter the documents when processing a search according to the Tags Cloud that will now be available on the Search Results page, and this will be true no matter what kind of search is being processed. In addition to all of that, all of the Pathways portlets will also be available for users to add to their My Page.

    Read the article

  • Procmail lock failures and errors while writing

    - by user58292
    I'm setting up a mail server on an embedded linux system. When sending mail to a local user I get the following error from procmail:

      procmail: Lock failure on "/home/mail/ktos/.mailspool.lock"
      procmail: Error while writing to "/home/mail/ktos/.mailspool"
      procmail: Error while writing to "/var/spool/mail/ktos"
      From root@waben Wed Dec 15 10:00:40 2010
        Folder: **Bounced**                                0
      procmail: Lock failure on "/root/.mailspool.lock"
      procmail: Error while writing to "/root/.mailspool"
      From MAILER-DAEMON Wed Dec 15 10:00:41 2010
        Subject: Returned mail: see transcript for details
        Folder: /var/spool/mail/root                    1732

    And the mail goes to /var/spool/mail/root. This is my /etc/procmailrc:

      PATH=/usr/bin:/usr/local/bin
      MAILDIR=$HOME/.mailspool
      DEFAULT=$HOME/.mailspool
      LOGFILE=/dev/pts/0
      SHELL=/bin/sh

    What could be the problem? I'm still pretty green with all the sendmail and procmail stuff as I'm primarily a developer.

    Read the article

  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I’ve got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to the data within a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore the files to include the file name, the folder that the file was stored in and the exact date that the file was deleted. It is easy for me to restore the file from within the DPM console since we have a recovery point created every day, I simply go to the day before the delete, browse to the proper folder and restore the file. The problem is that using the DPM console, the cumbersome wizard requires about 20 mouse clicks to restore a single file and it takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs 100’s of files restored… it takes all day of redundant mouse clicks to restore the files. Therefore, I want to use a PowerShell script (and I’m a novice at PowerShell) to automate this process. I want to be able to create a script that I pass in a file name, a folder, a recovery point date (and a protection group/server name if needed) and simply have the file restored back to its original location with some sort of success/failure notification. I thought it was a simple basic task of a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx that I have tried to follow, but it doesn’t accomplish what I really want to do (it’s too simplistic) and there are errors in the sample code. Therefore, I would like to get some help writing a script to restore these files. An example of the known values to restore the data are: DPM Server: BACKUP01 Protection Group: Document Repository Data Protected Server: FILER01 File Path: R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf Date Deleted: 8/2/2010 (last recovery point = 8/1/2010) Bonus Points: If you can help me not only create this script, but also show me how to automate by providing a text file with the above information that the PowerShell script loops through, or even better, is able to query our SQL server for the needed data, then I would be more than willing to pay for this development.

    Read the article

  • Top Oracle Validated Integration Partner Headlines - 28 Oct

    - by Roxana Babiciu
    Five9’s Cloud Contact Center Software Achieves Oracle Validated Integration with Oracle Service Cloud. Read more. eSkill Corporation Achieves Oracle Validated Integration with Oracle Taleo Business Edition Cloud Service. Read more. BEAM Compare Achieves Oracle Validated Integration with Oracle’s PeopleSoft 9.2. Read more. Enterprise Imaging Platform from Canon Information and Imaging Solutions, Inc. Achieves Oracle Validated Integration with Oracle's JD Edwards EnterpriseOne. Read more.

    Read the article

  • Plex won't enter my home directory or other partitions

    - by RobinJ
    I just installed the Plex media server from the Ubuntu Software Center, and opened the web interface. I wanted to start by adding a collection. When it gave me a file browser, I wanted to go to /home/robin/Videos. /home is as far as I got. It showed robin, with an arrow in front of it, but when I tried to expand the directory tree it was empty. The same happened when trying to access /media/Data. For me it's quite useless like this, as all of my media files are inside those 2 directories. Help would be much appreciated. My first guess seemed to be a correct one; It is, as always, a permissions problem. How do I give plex access to my home folder without also giving other users access to it? My home folder is encrypted by the way, so that'll probably complicate things a little. robin@RobinJ:~$ sudo -u plex bash [sudo] password for robin: bash: /home/robin/.bashrc: Permission denied plex@RobinJ:~$ ls -al ls: cannot open directory .: Permission denied plex@RobinJ:~$ cd /home plex@RobinJ:/home$ cd robin bash: cd: robin: Permission denied plex@RobinJ:/home$ ls -al robin ls: cannot open directory robin: Permission denied

    Read the article

  • Retrieving Chrome History and History Archives for Mac

    - by Justin
    I've read some articles here that suggest that we are able to retrieve Chrome's history as SQLite databases: Export history from Chrome browser How can I view "archived" Google Chrome history — i.e. history older than three months? I can't seem to find the folder where these are supposed to be located. In the first article, the provided path was ~/Library/Application\ Support/Google/Chrome/Default/History, but there is no such directory in my filesystem. I am using Chrome 26.0.1410.65 on Mac OS X Lion, and I am signed in with my Google account. The article http://unlockforus.blogspot.it/2008/09/how-opening-google-chrome-files-history.html linked in the last of the above is very useful, but it is only for Windows. Is there something similar to that but for Mac? Is there a way to find the folder where Chrome stores my history?
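    Once the History file is located, it can be opened like any other SQLite database (copy it first, since Chrome keeps the live file locked while it is running). Below is a minimal sketch of reading the most recent entries; it assumes the Microsoft.Data.Sqlite package and Chrome's usual urls table, and the file path shown is only a placeholder:

      using System;
      using Microsoft.Data.Sqlite;

      class ChromeHistoryDump
      {
          static void Main()
          {
              // Placeholder path: point this at a copy of the "History" file.
              var dbPath = "/Users/me/Desktop/History-copy";

              using (var connection = new SqliteConnection("Data Source=" + dbPath))
              {
                  connection.Open();

                  var command = connection.CreateCommand();
                  command.CommandText =
                      "SELECT url, title, last_visit_time " +
                      "FROM urls ORDER BY last_visit_time DESC LIMIT 20";

                  using (var reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          var url = reader.GetString(0);
                          var title = reader.IsDBNull(1) ? "" : reader.GetString(1);

                          // Chrome stores timestamps as microseconds since 1601-01-01 UTC.
                          var micros = reader.GetInt64(2);
                          var visited = new DateTime(1601, 1, 1, 0, 0, 0, DateTimeKind.Utc)
                                            .AddTicks(micros * 10);

                          Console.WriteLine(visited.ToString("u") + "  " + title + "  " + url);
                      }
                  }
              }
          }
      }

    The same query works from the sqlite3 command line if you prefer not to write code; the point is just that the History file is an ordinary SQLite database.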

    Read the article

  • Xubuntu 12.04 totally refuses to boot after install with Wubi on old computer. Any tips or suggestions on things I can do to fix this problem?

    - by WizardinBlack
    So, I downloaded Xubuntu 12.04 32-bit (not the alternate version) from the xubuntu.org website and placed the .zip in a folder named XUBUNTU on my desktop. I then extracted the files to that folder and ran Wubi. I selected Xubuntu from the drop-down and clicked install. I let it do its thing and then chose the option to reboot when prompted. This is when I went to my girlfriend's house for the day and left my laptop (Dell Latitude 120L) to finish the install. I came back home (about 8 hours later) to find my computer on the Windows 7 login screen, so I rebooted my computer once more. I booted Xubuntu just fine and things seemed to work perfectly. I turned off the PC and went to bed. The next morning I booted Xubuntu again without problems. I rebooted and went to Windows, then the next time I tried booting Xubuntu, in the middle of the splash screen it went black and the CPU light stopped blinking. I force shut down the computer and tried booting again, when the same thing happened (instead of going black the splash just froze). I went back to Windows and uninstalled and reinstalled the OS and haven't been able to boot it since. I even tried booting Lubuntu with the same problem. Please help!

    Read the article

  • Week in Geek: Google Asks for Kids’ Social Security Numbers Edition

    - by Asian Angel
    This week we learned how to make hundreds of complex photo edits in seconds with Photoshop actions, use an Android Phone as a modem with no rooting required, install a wireless card in Linux using Windows drivers, change Ubuntu’s window borders with Emerald, how noise reducing headphones work, and more. Photo by Julian Fong. Latest Features How-To Geek ETC Should You Delete Windows 7 Service Pack Backup Files to Save Space? What Can Super Mario Teach Us About Graphics Technology? Windows 7 Service Pack 1 is Released: But Should You Install It? How To Make Hundreds of Complex Photo Edits in Seconds With Photoshop Actions How to Enable User-Specific Wireless Networks in Windows 7 How to Use Google Chrome as Your Default PDF Reader (the Easy Way) Preliminary List of Keyboard Shortcuts for Unity Now Available Bring a Touch of the Wild West to Your Desktop with the Rango Theme for Windows 7 Manage Your Favorite Social Accounts in Chrome and Iron with Seesmic E.T. II – Extinction [Fake Movie Sequel Video] Remastered King’s Quest Games Offer Classic Gaming on Modern Machines Compare Your Internet Cost and Speed to Global Averages [Infographic]

    Read the article

  • sqlsrv not showing up in my phpinfo

    - by sirg45
    I have just installed PHP 5.3 on Windows Server 2008 R2 running IIS7. phpinfo() is working fine. Now I want to see if I have correctly installed the Microsoft Drivers for PHP for SQL Server. I downloaded from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=80E44913-24B4-4113-8807-CAAE6CF2CA05#RelatedResources I have dropped the 2 DLLs (php_pdo_sqlsrv_53_nts_vc9.dll and php_sqlsrv_53_nts_vc9.dll) into the PHP\ext folder and referenced them in php.ini. I restarted the server. But when I run phpinfo() I'm not seeing any reference to sqlsrv. Is that normal? Or should there also be a section of phpinfo() dedicated to these sqlsrv extensions? Error logging is on but there are no errors coming up in the php-errors.log referring to sqlsrv. Both files php_pdo_sqlsrv_53_nts_vc9.dll and php_sqlsrv_53_nts_vc9.dll have been added (the non-thread-safe version for IIS), and php5.dll is present in the PHP install folder. Thanks for any pointers.

    Read the article

  • automysqlbackup not working weirdly on shared host

    - by KPL
    I'm on HostGator and I have not-root-level SSH access. I manually set up automysqlbackup in a /home/user/automysqlbackup folder, created a file called runbackup in the same folder, and chmod +x'ed it. Its content: /home/user/automysqlbackup/automysqlbackup /home/user/automysqlbackup/myserver.conf Now, when I run ./runbackup from the shell, no email is sent. I've set up a daily cron job. The crontab line reads: 24 06 * * * /home/user/automysqlbackup/runbackup When the cron job runs, I do get an email, but the subject is WARNING: Error Reported - MySQL Backup Log and SQL Files for localhost - 012-03-19_06h24m The email does have an attachment, but it just contains the SQL for creating the tables, with no row data at all. I don't know what I'm doing wrong and this is freaking me out! I tried writing a custom script as well, using this guide, but then mutt doesn't send the email. The OS used by HostGator is CentOS.

    Read the article

  • Do all Mac OS X applications require Admin permissions to 'install'?

    - by Andy
    I'm new to the whole Mac OS X operating system. I'm trying to learn and I've got myself a MacBook running Mac OS X 10.7.3. I've created a test user that cannot administer so that I can test out permissions, and I've found that I cannot do anything in the Applications folder, which includes 'installing' applications (even those drag 'n' drop ones) and creating folders, without entering an Admin name and password. However, I was under the impression that this wasn't the case and you only needed Admin permissions to write to somewhere like Preferences, so can somebody please clarify why it is asking for Admin when I try to drag 'n' drop applications into the Applications folder?

    Read the article

  • Measuring Usability with Common Industry Format (CIF) Usability Tests

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience A User-centered Research and Design Process The Oracle Fusion Applications user experience was five years in the making. The development of this suite included an extensive and comprehensive user experience design process: ethnographic research, low-fidelity workflow prototyping, high fidelity user interface (UI) prototyping, iterative formative usability testing, development feedback and iteration, and sales and customer evaluation throughout the design cycle. However, this process does not stop when our products are released. We conduct summative usability testing using the ISO 25062 Common Industry Format (CIF) for usability test reports as an organizational framework. CIF tests allow us to measure the overall usability of our released products.  These studies provide benchmarks that allow for comparisons of a specific product release against previous versions of our product and against other products in the marketplace. What Is a CIF Usability Test? CIF refers to the internationally standardized method for reporting usability test findings used by the software industry. The CIF is based on a formal, lab-based test that is used to benchmark the usability of a product in terms of human performance and subjective data. The CIF was developed and is endorsed by more than 375 software customer and vendor organizations led by the National Institute for Standards and Technology (NIST), a US government entity. NIST sponsored the CIF through the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards-making processes. Oracle played a key role in developing the CIF. The CIF report format and metrics are consistent with the ISO 9241-11 definition of usability: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” Our goal in conducting CIF tests is to measure performance and satisfaction of a representative sample of users on a set of core tasks and to help predict how usable a product will be with the larger population of customers. Why Do We Perform CIF Testing? The overarching purpose of the CIF for usability test reports is to promote incorporation of usability as part of the procurement decision-making process for interactive products. CIF provides a common format for vendors to report the methods and results of usability tests to customer organizations, and enables customers to compare the usability of our software to that of other suppliers. CIF also enables us to compare our current software with previous versions of our software. CIF Testing for Fusion Applications Oracle Fusion Applications comprises more than 100 modules in seven different product families. These modules encompass more than 400 task flows and 400 user roles. Due to resource constraints, we cannot perform comprehensive CIF testing across the entire product suite. Therefore, we had to develop meaningful inclusion criteria and work with other stakeholders across the applications development organization to prioritize product areas for testing. Ultimately, we want to test the product areas for which customers might be most interested in seeing CIF data. We also want to build credibility with customers; we need to be able to make the case to current and prospective customers that the product areas tested are representative of the product suite as a whole. 
Our goal is to test the top use cases for each product. The primary activity in the scoping process was to work with the individual product teams to identify the key products and business process task flows in each product to test. We prioritized these products and flows through a series of negotiations among the user experience managers, product strategy, and product management directors for each of the primary product families within the Oracle Fusion Applications suite (Human Capital Management, Supply Chain Management, Customer Relationship Management, Financials, Projects, and Procurement). The end result of the scoping exercise was a list of 47 proposed CIF tests for the Fusion Applications product suite.  Figure 1. A participant completes tasks during a usability test in Oracle’s Usability Labs Fusion Supplier Portal CIF Test The first Fusion CIF test was completed on the Supplier Portal application in July of 2011.  Fusion Supplier Portal is part of an integrated suite of Procurement applications that helps supplier companies manage orders, schedules, shipments, invoices, negotiations and payments. The user roles targeted for the usability study were Supplier Account Receivables Specialists and Supplier Sales Representatives, including both experienced and inexperienced users across a wide demographic range.  The test specifically focused on the following functionality and features: Manage payments – view payments Manage invoices – view invoice status and create invoices Manage account information – create new contact, review bank account information Manage agreements – find and view agreement, upload agreement lines, confirm status of agreement lines upload Manage purchase orders (PO) – view history of PO, request change to PO, find orders Manage negotiations – respond to request for a quote, check the status of a negotiation response These product areas were selected to represent the most important subset of features and functionality of the flow, in terms of frequency and criticality of use by customers. A total of 20 users participated in the usability study. The results of the Supplier Portal evaluation were favorable and exceeded our expectations. Figure 2. Fusion Supplier Portal Next Studies We plan to conduct two Fusion CIF usability studies per product family over the next nine months. The next product to be tested will be Self-service Procurement. End users are currently being recruited to participate in this usability study, and the test sessions are scheduled to begin during the last week of November.

    Read the article

  • LAMP/TURNKEY LINUX/VIRTUAL BOX: Manipulating Files on a Virtual Machine

    - by aeid
    Hi, I am running Ubuntu 9.10 and I want to install turnkey linux's LAMP server on my machine to test out my code. I installed Turnkey LAMP via VirtualBox and it seems to be working because I can access the http://localhost. My question is: How do I manipulate files via VirtualBox? For example, if I had installed LAMP on my machine (not on a virtual machine), I could easily add/edit/delete files in the var/WWW folder. Where is the equivalent of "WWW" folder on Virtualbox and how can I interface with it? Thanks,

    Read the article

  • How to Create an Easy Pixel Art Avatar in Photoshop or GIMP

    - by Eric Z Goodnight
    Boingboing.net has a cool set of meticulously drawn pixel art portraits for their key writers. If you’re a lover of pixel art, why not try to recreate a similar avatar for yourself with a few simple filters in either Photoshop or GIMP? How-To Geek has covered a few different ways to create pixel art from ordinary graphics, and this is another simple pixel art method using a different technique. Watch as we transform two ordinary photographs into blocky masterpieces, as well as compare the techniques used between Photoshop and the GIMP. Read on!

    Read the article
