Search Results

Search found 38203 results on 1529 pages for 'library development'.

  • iphone - How do I add videos to iPad simulator?

    - by Mike
    No, dropping the videos into ~/Library/Application Support/iPhone Simulator/3.2/Media/DCIM/100APPLE does not work completely: the simulator can see the video in Photos.app, but when I try to pick a video using UIImagePickerController my application crashes. I think this may be related to the format the video has to have. I am using QuickTime to generate the video with the "for iPhone" settings, so it generates an M4V at 480x360 pixels in H.264. I have tried creating a MOV with the same characteristics, and one at 640x480, but nothing works. I have also dropped in a movie created with an iPhone 3GS, and it still crashes.

    This is the error I see when it crashes:

        *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSCFArray objectAtIndex:]: index (1) beyond bounds (0)'

    The method didFinishPickingMediaWithInfo is never called, so it is some issue with the simulator or with the video. The app crashes as soon as I pick the video. Any ideas? Thanks.

  • C++ Direct 2D How To Resize ID2D1Bitmap

    - by Nkosi Dean
    I'm writing a simple GUI library for a game I'm making, and each control needs to have a bitmap to draw to when the control needs redrawing. When it does not need to be redrawn, it will have an already-rendered bitmap ready to display to the screen. Since controls can be resized, this bitmap also needs to be resized so the control can be fully drawn into it properly. How can I achieve this, since there does not appear to be a Resize method on ID2D1Bitmap, unlike ID2D1HwndRenderTarget, which can be resized?

  • DirectShow - passing parameters to custom source push filter

    - by mkurek
    Hello, I'm working on a solution that will be used to receive video streams from remote hosts and overlay various text on top of them. Currently it consists of a custom DirectShow push source filter (C++), which receives data from the remote hosts using the RTP protocol, and a tiny C# application that sets up the DirectShow graph and is used as a container for the video. I'm using the DirectShowLib interop library. However, I'm not sure how to pass parameters from this C# app to my custom filter. What are possible ways to do it?
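
    One common approach, sketched below for illustration only: expose a custom COM interface on the push filter and query it from C# through the DirectShowLib IBaseFilter reference. The interface name, GUID, method, and endpoint values here are hypothetical placeholders; they must match whatever the C++ filter actually implements.

        using System.Runtime.InteropServices;
        using DirectShowLib;

        // Hypothetical interface - the GUID must equal the IID defined in the C++ filter
        [Guid("12345678-1234-1234-1234-123456789ABC")]
        [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        public interface IRtpSourceConfig
        {
            [PreserveSig]
            int SetRemoteHost([MarshalAs(UnmanagedType.LPWStr)] string host, int port);
        }

        public static class FilterConfig
        {
            // Casting the RCW performs a COM QueryInterface under the hood
            public static void Configure(IBaseFilter sourceFilter)
            {
                var config = (IRtpSourceConfig)sourceFilter;
                Marshal.ThrowExceptionForHR(config.SetRemoteHost("192.0.2.10", 5004));
            }
        }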

  • NFS to NFS mount

    - by dude
    I have a machine that I need to bridge NFS files to. Can I mount an NFS directory from machine1 on machine2, and then mount that mounted directory on machine3 via NFS from machine2? Do you see any problems with that? I am basically bridging some subnet domains this way, in a certain fashion. My development machine is on a different, separate (unbridged) subnet than where I would like to use the files, hence this machine1 (dev machine) -> machine2 (passthrough machine) -> machine3 (test machine) connection. And no, there is no way to move the test machine, as it's a chassis :) and it's two buildings away.

  • Is Programming == Math?

    - by moffdub
    I've heard many times that all programming is really a subset of math. Some suggest that OO, at its roots, is mathematically based. I don't get the connection. Aside from some obvious examples:

    - using induction to prove a recursive algorithm
    - formal correctness proofs
    - functional languages
    - lambda calculus
    - asymptotic complexity
    - DFAs, NFAs, Turing machines, and theoretical computation in general
    - the fact that everything on the box is binary

    in what ways is programming really a subset of math? I'm looking for an explanation that might have relevance to enterprise/OO development (if there is a strong enough connection, that is). Thanks in advance.

    Edit: as I stated in a comment to an answer, math is uber important to programming, but what I struggle with is the "subset" argument.

  • IE8 Jquery Javascript "Error: Object required" Bug

    - by thechrisvoth
    IE8 throws an "Error: Object required" message (error in the actual jquery library script, not my javascript file) when the switch statement in this function runs. This code works in IE6, IE7, FF3, and Safari... Any ideas? Does it have something to do with the '$(this)' selector in the switch? Thanks! function totshirts(){ $('.shirt-totals input').val('0'); var cxs = 0; var cs = 0; var cm = 0; $.each($('select.size'), function() { switch($(this).val()){ case "cxs": cxs ++; $('input[name="cxs"]').val(cxs); break; case "cs": cs ++; $('input[name="cs"]').val(cs); break; case "cm": cm ++; $('input[name="cm"]').val(cm); break; } }); }

  • ASP.NET Routing : RouteCollection class missing

    - by Shyju
    I am developing a website (Web Forms, not MVC) in VS 2008 (with SP1), and I am trying to incorporate ASP.NET routing. I am following the MSDN tutorial at http://msdn.microsoft.com/en-us/library/cc668201.aspx and have added the items below to my global.asax.cs file as per the tutorial:

        protected void Application_Start(object sender, EventArgs e)
        {
            RegisterRoutes(RouteTable.Routes);
        }

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.Add(new Route
            (
                "Category/{action}/{categoryName}",
                new CategoryRouteHandler()
            ));
        }

    When I try to build, it reports: "The type or namespace name 'RouteCollection' could not be found (are you missing a using directive or an assembly reference?)". I have System.Web imported in my global.asax file. Any idea how to get rid of this?
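
    For context, a minimal sketch of the declarations this code needs: RouteCollection and Route live in the System.Web.Routing namespace, which ships in its own assembly (System.Web.Routing.dll, added with .NET 3.5 SP1), so both the assembly reference and the using directive must be present.

        // System.Web alone is not enough; the routing types have their own
        // namespace and assembly.
        using System.Web;
        using System.Web.Routing; // requires a reference to System.Web.Routing.dll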

  • Create a subarray reference in C# (using unsafe ?)

    - by Wam
    Hello there, I'm refactoring a library we currently use, and I'm faced with the following problem. We used to have the following:

        class Blah
        {
            float[][] data;

            public float[] GetDataReference(int index)
            {
                return data[index];
            }
        }

    For various reasons, I have replaced this jagged-array version with a one-dimensional array version, concatenating the inner arrays. My question is: how can I still return a reference to a sub-array of data?

        class Blah
        {
            float[] data;
            int rows;

            public float[] GetDataReference(int index)
            {
                // Return a reference to data from offset i to offset j
            }
        }

    I was thinking that unsafe code and pointers may be of use. Is it doable?
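
    One safe alternative, sketched here under the assumption that every row has the same length: wrap the flat buffer in an ArraySegment<float>, which references a range of the existing array without copying and without unsafe code. The method name is hypothetical; it would live in the Blah class above.

        // Expose one row of the flat buffer as a no-copy segment
        public ArraySegment<float> GetDataSegment(int index)
        {
            int cols = data.Length / rows; // row width, assuming uniform rows
            return new ArraySegment<float>(data, index * cols, cols);
        }

    Callers then read segment.Array using segment.Offset and segment.Count rather than indexing a raw float[].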

  • Grid view in iPhone SDK

    - by Jack
    Hi, I would like to create a grid view in my iPhone app similar to that shown in the iPad photos app. I know that the iPhone 3.2 SDK is under NDA, but is there a library or framework available for adding this kind of functionality (non-SDK). Ideally I would like to eventually develop an iPad version of the app, where the grid would be 3 items wide in portrait and 4 in landscape, however for the time being I would like 2 items wide in portrait and 3 wide in landscape. The only way I can think of doing this is by subclassing UITableView and having a custom cell that creates the 2 or 3 items. This however seems messy and I am sure that there is a better way. A typical item will have a picture, label and button - nothing too complicated. Thanks

  • Is your team a high-performing team?

    As a child I can remember looking out of the car window as my father drove along the Interstate in Florida, seeing prisoners wearing bright orange jump suits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure; the prisoners' primary responsibility is to pick up trash and debris from the roadway. This is a prime example of a work group, or working group, of the kind used by most prison systems in the United States.

    Work groups, or working groups, can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are only established for a short period of time and are dissolved once the desired outcome has been achieved. More often than not, group members feel as though they are expendable to the group, and some even dread being in the group at all.

    "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993)

    So how do you determine that a team is a high-performing team? This can be determined by three baseline criteria: consistently high-quality output, the promotion of personal growth and well-being of all team members, and, most importantly, the ability to learn and grow as a unit. Initially, a team can produce high-performing output without meeting all three criteria; however, this will erode over time, because when team members feel detached from the group, or feel that they are not growing, the quality of the output declines.

    High-performing teams are similar to work groups because both utilize a collection of individuals or entities to accomplish tasks. What distinguishes a high-performing team from a work group are its characteristics. High-performing teams contain five core characteristics, and these characteristics are what separate a group from a team: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice.

    A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team life cycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team life cycle are: Formulating, Storming, Normalizing, Performing, and Adjourning.

    The Formulating State of a team is first realized when the team members are defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader, because team members are normally unclear about the specific roles, obstacles and goals that may lie ahead of them. Finally, this stage is where most team members initially meet one another, prior to working as a team, unless the team members already know each other.

    The Storming State normally arrives directly after the formulation of a new team, because there are still a lot of unknowns amongst the newly formed assembly.
    As a general rule, most of the parties involved in the team are still getting used to the workload, pace of work, deadlines and the validity of the various tasks that need to be performed by the group. In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of the project, and the leadership abilities of the team leader. This can be exemplified by looking at the interactions between animals when they first meet: consider a scenario where two people are walking directly toward each other with their dogs. The dogs will automatically enter the Storming State because they do not know the other dog. Typically in this situation, they attempt to establish which is more dominant via play or fighting, depending on how the dogs interact with each other. Once dominance has been defined and accepted by both dogs, they will either want to play or leave, depending on how they interacted and on other environmental variables.

    Once the Storming State has run its course, the Normalizing State takes over. This state is entered by a team once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects. Typically, participants in the team are filled with energy and camaraderie, and a strong alignment with team goals and objectives. A high school football team at the start of its season is a perfect example of the Normalizing State: the player positions have been assigned, the depth chart has been filled, and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities, and are united by competition from other teams.

    The Performing State is achieved by a team when its history, working habits, and culture solidify the team as one working unit. In this state team members can anticipate one another's specific behaviors, attitudes and reactions, and challenges are seen as opportunities rather than problems. Additionally, each team member knows their role in the team's success, and the roles of others. This is the most productive state of a group, and is where all the time invested in working together really pays off. If you watch an Olympic figure skating pair, you can easily see how the time spent working together benefits their performance: they skate as one unit even though the team is comprised of two skaters. Each skater has their routine completely memorized, as well as their partner's, which allows them to anticipate each other's moves on the ice and makes their skating look effortless.

    The final state of a team is the Adjourning State. This state is where accomplishments by the team, and by each individual team member, are recognized. Additionally, this state allows for reflection on the interactions between team members, the work accomplished, and the challenges that were faced. Finally, the team celebrates the challenges it has faced and overcome as a unit.

    Currently in the workplace, teams are divided into two different types: co-located and distributed teams. Co-located teams are defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of team has dominated business in the past, due to inadequate technology which forced workers to interact primarily via face-to-face meetings. Team meetings are primarily led by the person with the highest status in the company.
    Having personally participated in meetings of this type, I have seen how a select few of the team members dominate the flow of communication, which reduces the input of others in group discussions. Since discussions are dominated by a select few individuals, group discussion becomes skewed in favor of the opinions of those who communicate the most in meetings. In addition, team members might not give their full opinions on a topic of discussion, in part so as not to offend or create controversy amongst the team, which can further tilt decisions made in meetings toward the opinions of the dominating team members.

    Distributed teams are, by definition, spread across an area or subdivided into separate sections, and that is exactly how they differ from a more traditional team. It is commonplace for distributed teams to have team members across town, in the next state, across the country and, with the advances in technology over the last 20 years, even across the world. These teams allow for more diversity than other types of teams because they allow for more flexibility regarding location. A team could consist of a 30-year-old male Italian project manager from New York, a 50-year-old female Hispanic from California, and a collection of programmers from India, because technology allows them to communicate as if they were standing next to one another. In addition, distributed team members consult with more team members prior to making decisions than traditional teams do, and they take longer to come to decisions due to differences in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team, and to notify others of potential issues regarding the work that the team is doing.

    Virtual teams, a subset of the distributed team type, are changing organizational strategies, because a team can now, in essence, work 24 hours a day by utilizing employees in various time zones and locations. A primary example of this is customer service departments: a company can have multiple call centers spread across multiple time zones, allowing it to appear to be open 24 hours a day while all of its employees work from 9 AM to 5 PM every day. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee lives, because they will be part of a virtual team; all that is needed is the proper technology to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, those team members who only occasionally come into the office can share one desk amongst multiple people. This is definitely a cost-cutting plus, given the current state of the economy.

    One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to focus on investing in ongoing personal development; providing team members with the direction, structure, and resources needed to accomplish their work; making the right interventions at the right time; and helping the team manage boundaries between the team and the various external parties involved in the team's work.

    A team leader needs to invest in ongoing personal development in order to effectively manage their team. People have said that attitude is everything; this is very true of leaders and leadership.
    A team takes on the attitudes and behaviors of its leaders, and this can potentially harm the team and the team's output. Leaders must concentrate on self-awareness, and on understanding their team's group dynamics, to fully understand how to lead them. In addition, continually learning new leadership techniques from other effective leaders is also very beneficial.

    Providing team members with the direction, structure, and resources that they need to accomplish their work collectively sounds easy, but it is not. Leaders need to be able to effectively communicate to their team how their work helps the company reach its organizational vision. Conversely, the leader needs to allow the team to work autonomously, within specific guidelines, to turn the company's vision into a reality. That being said, the team must be appropriately staffed according to the size and complexity of the team's tasks. These tasks should be clear and meaningful to the company's objectives, and should allow for feedback to be exchanged between the leader and the team members, and between the leader and upper management. Now, if the team is properly staffed and has a clear and full understanding of what is to be done, the company must also supply the workers with the proper tools to achieve the tasks that they are asked to do. No one should be asked to dig a hole without being given a shovel. Finally, leaders must reward their team members for the accomplishments they achieve. Rewards could range from a simple congratulatory email, to a party celebrating the completion of a large project, to monetary rewards.

    Managing boundaries is very important for team leaders, because boundary issues can alter the attitudes of team members and can add undue stress to the team, which will force them to lose focus on the tasks at hand. Team leaders should promote communication between team members so that burdens are shared amongst the team and solutions can be derived from hearing the opinions of multiple sources. This also reinforces team camaraderie and working as a unit.

    Team leaders must manage the type and timing of interventions so as not to create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves, which can in turn prompt further, unneeded interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form, so that unproductive behaviors are removed from the team early and it can retain focus on its agenda. If an intervention is effectively executed, the team will feel energized about the work that it is doing, communication and interaction amongst the group will improve, and morale will rise overall.

    High-performing teams are very important to organizations because they consistently produce high-quality output and develop a collective purpose for their work. This drive to succeed allows team members to utilize their specific talents, allowing for growth in those areas. In addition, these team members usually take on a sense of ownership of their projects and feel that the other team members are irreplaceable.

    References:

    Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-Performance Organization. Boston: Harvard Business School Press.

    Singleton, A. "Three ways to organize your team: co-located, outsourced, or global." http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx

  • About unit testing a function in the zend framework and unit testing in general

    - by sanders
    Hello people, I am diving into the world of unit testing, and I am sort of lost. I learned today that unit testing is testing whether a function works. I wanted to test the following function:

        public function getEventById($id)
        {
            return $this->getResource('Event')->getEventById($id);
        }

    So I wrote the test as follows:

        public function test_Event_Get_Event_By_Id_Returns_Event_Item()
        {
            $p = $this->_model->getEventById(42);
            $this->assertType('EventManager_Resource_Event_Item_Interface', $p);
        }

    But then I got this error:

        1) EventTest::test_Event_Get_Event_By_Id_Returns_Event_Item
        Zend_Db_Table_Exception: No adapter found for EventManager_Resource_Event
        /home/user/Public/ZendFramework-1.10.1/library/SF/Model/Abstract.php:101
        /var/www/nrka2/application/modules/eventManager/models/Event.php:25

    Then someone told me that I was writing an integration test, not a unit test. So I figured that I have to test the function getEventById in a different way, but I don't understand how. All this function does is call a resource and return the event by id.

  • Raphael JS - Parsing an SVG on the fly

    - by Chris
    I found a neat SVG parser at http://bkp.ee/atirip/ which parses an SVG file and outputs JavaScript that uses the Raphael JS library (raphaeljs.com). You'll notice in the source code at http://bkp.ee/atirip/svg2rdemo.php:

        <script>
        jQuery(document).ready(function() {
            $("#c1").each(function() {
                var c = Raphael(this, 190, 154, 0, 0);
                var g1 = c.set();
                ...

    that it creates variables like g1, g2, etc., but it also reuses these variables. I would like to create unique variables for each group. In my .ai file I have named my groups, and I would like to use these names for the variable names. Where in http://bkp.ee/atirip/f/svgToRaphaelParser.php.zip should I look to make this change?

  • Giving a child window focus in IE8

    - by Andrew K
    I'm trying to launch a popup window from a Javascript function and ensure it has focus using the following call: window.open(popupUrl, popupName, "...").focus(); It works in every other browser, but IE8 leaves the new window in the background with the flashing orange taskbar notification. Apparently this is a feature of IE8: http://msdn.microsoft.com/en-us/library/ms536425%28VS.85%29.aspx It says that I should be able to focus the window by making a focus() call originating from the new page, but that doesn't seem to work either. I've tried inserting window.focus() in script tags in the page and the body's onload but it has no effect. Is there something I'm missing about making a focus() call as the page loads, or another way to launch a popup that IE8 won't hide?

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically, making it much easier than in previous versions to take advantage of compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing, so it works with any kind of Web application framework.

    While static compression on IIS 7 is super easy to set up and is turned on by default for most text content (text/*, which includes HTML and CSS, as well as JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS -> ASP.NET hierarchy. Let's take a look at each of the two approaches available:

    Static Compression: Compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk and serving the compressed alias whenever the static content is requested and hasn't changed. The overhead for this is minimal and it should be aggressively enabled.

    Dynamic Compression: Works against application-generated output, such as the output of your ASP.NET apps. Unlike static content, dynamic content must be compressed every time the page that produces it regenerates its content. As such, dynamic compression has a much bigger impact than static caching.

    How Compression is configured

    Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is from the default settings in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5, with a couple of small adjustments (added JSON output and enabled dynamic compression):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
              <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
              <dynamicTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="application/json" enabled="true" />
                <add mimeType="*/*" enabled="false" />
              </dynamicTypes>
              <staticTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="application/atom+xml" enabled="true" />
                <add mimeType="application/xaml+xml" enabled="true" />
                <add mimeType="*/*" enabled="false" />
              </staticTypes>
            </httpCompression>
            <urlCompression doStaticCompression="true" doDynamicCompression="true" />
          </system.webServer>
        </configuration>

    You can find documentation on the httpCompression and urlCompression keys here, respectively:

    http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx
    http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

    The httpCompression Element – What and How to compress

    Basically, httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime types, which relies on the Content-Type headers returned in HTTP responses.
    For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

    The urlCompression Element – Enables and Disables Compression

    The urlCompression element is a quick way to turn compression on and off. By default, static compression is enabled server-wide and dynamic compression is disabled server-wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute of the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The do*Compression attributes are the final determining factor for whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false", but doStaticCompression="true"!

    Static Compression is enabled by Default, Dynamic Compression is not

    Because static compression is very efficient in IIS 7, it's enabled by default server-wide, and there probably is no reason to ever change that setting. Dynamic compression, however, since it's more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting:

        <urlCompression doDynamicCompression="true" />

    in ApplicationHost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice, because you're not likely to want dynamic compression in every application on a server; rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in ApplicationHost.config doesn't work (more likely it is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

    How Static Compression works

    Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it's fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server's hard disk and then reads the content from this location if already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder, which is located at:

        %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

    If you look into the subfolders you'll find the pre-compressed files, which IIS serves directly to the client until the underlying files are changed. As I mentioned before, static compression is on by default and there's very little reason to turn that functionality off, as it is efficient and just works out of the box. The one tweak you might want to make is to set the compression level to maximum. Since IIS only compresses content very infrequently, it would make sense to apply maximum compression.
    You can do this with the staticCompressionLevel setting on the scheme element:

        <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    Other than that, the default settings are probably just fine.

    Dynamic Compression – not so fast!

    By default dynamic compression is disabled, and that's actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content, as it would add a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is, you can play around with compression and see what impact it has on your server's performance.

    There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically, the httpCompression key has a couple of CPU-related attributes that can help minimize the impact of dynamic compression on a busy server:

    - dynamicCompressionDisableCpuUsage
    - dynamicCompressionEnableCpuUsage

    By default these are set to 90 and 50, which means that when CPU usage hits 90%, compression is disabled until CPU utilization drops back down to 50%. Again, this is actually quite sensible, as it uses CPU power for compression when it's available and backs off when the threshold has been hit. It's a good way to use some of that extra CPU power on your big servers when utilization is low. These settings are something you likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't trust settings – do some load testing, or monitor your server in a live environment, to see what values make sense for your environment.

    Finally, for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type:

        <add mimeType="application/json" enabled="true" />

    What about Deflate Compression?

    The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions deflate compression. And sure enough, you can change the compression settings to:

        <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away: changing the scheme in ApplicationHost.config didn't show up on the site immediately, and it required a full IISReset before I saw the change over to deflate-compressed content. Content was slightly more compressed with deflate – I'm not sure if that's worth the slightly less common compression type, but the option at least is available.

    IIS 7 finally makes GZip Easy

    In summary, IIS 7 finally makes GZip easy, even if the configuration settings are a bit obtuse and the documentation is seriously lacking.
    But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer, as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky, as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use GZip compression.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, ASP.NET.
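
    To pull the configuration steps above together, here is a minimal local web.config sketch (my own composition from the elements the article describes, not taken verbatim from the post) that switches dynamic compression on for a single application; the mime-type additions stay in ApplicationHost.config's httpCompression section, as shown earlier:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- urlCompression has the final say; the server-wide default
                 leaves doDynamicCompression off, so override it here -->
            <urlCompression doStaticCompression="true" doDynamicCompression="true" />
          </system.webServer>
        </configuration>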
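
    And a small C# sketch (again my own, not from the article, with a hypothetical URL) for sanity-checking from the client side that responses actually come back compressed: advertise gzip/deflate support the way a browser would, then inspect the Content-Encoding header of the response.

        using System;
        using System.Net;

        class CompressionCheck
        {
            static void Main()
            {
                // Hypothetical URL - point this at a page on the site you just configured
                var req = (HttpWebRequest)WebRequest.Create("http://localhost/default.aspx");

                // Advertise compression support; without this header the server
                // always responds uncompressed
                req.Headers[HttpRequestHeader.AcceptEncoding] = "gzip, deflate";

                using (var resp = (HttpWebResponse)req.GetResponse())
                {
                    // Empty output means the server sent the response uncompressed
                    Console.WriteLine("Content-Encoding: '" + resp.ContentEncoding + "'");
                    Console.WriteLine("Content-Type: " + resp.ContentType);
                }
            }
        }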

  • Best "For Pay" wpf controls

    - by Vaccano
    If this question has been asked before then I apologize and will join in voting to close it (or just delete it), but I could not find it. We are potentially embarking on building many of our apps in WPF. Most of our apps are normal business apps that will not need too much eye candy; a tasteful UI is nice, but I don't see us doing lots of custom animations and such. So, my question is: which third-party control sets are the best ones to purchase to save time when developing apps like this? (They should work with both Visual Studio 2008 and 2010.)

  • How to choose between protobuf-csharp-port and protobuf-net

    - by PierrOz
    Hi folks, I've recently had to look for a C# port of the Protocol Buffers library originally developed by Google. And guess what, I found two projects, each owned by a very well-known person here: protobuf-csharp-port, written by Jon Skeet, and protobuf-net, written by Marc Gravell. My question is simple: which one should I choose? I quite like Marc's solution, as it seems closer to the C# philosophy (for instance, you can just add attributes to the properties of an existing class), and it looks like it can support .NET built-in types such as System.Guid. I am sure both of them are really great projects, but what's your opinion?
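
    For illustration, a minimal protobuf-net sketch of the attribute-based approach mentioned above (the type and member names are hypothetical):

        using System.IO;
        using ProtoBuf; // protobuf-net

        [ProtoContract]
        public class Person
        {
            [ProtoMember(1)]
            public int Id { get; set; }

            [ProtoMember(2)]
            public string Name { get; set; }
        }

        public static class Demo
        {
            // Serialize to the protobuf wire format and back again
            public static Person RoundTrip(Person p)
            {
                using (var ms = new MemoryStream())
                {
                    Serializer.Serialize(ms, p);
                    ms.Position = 0;
                    return Serializer.Deserialize<Person>(ms);
                }
            }
        }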

  • Import Eclipse Plugin

    - by Timo
    Hey everybody, I developed an Eclipse plugin (a view) using the Plug-in Development Environment edition of Eclipse. I exported the plugin as a jar archive, and now I would like to import it into another installation of Eclipse, but that Eclipse doesn't recognize the plugin. This is my first plugin, so I might have forgotten something obvious. I searched the internet for hours but didn't find anything. Does anybody know what else I have to do? Btw, the dependencies of my plugin are: org.eclipse.ui, org.eclipse.core.runtime, org.eclipse.core.filesystem, edu.tum.cs.eclipse.commons, org.eclipse.ui.ide, org.eclipse.core.resources. The version I'd like to import it into is the default Eclipse for PHP Developers from eclipse.org; the feature list can be found at http://www.eclipse.org/downloads/packages/eclipse-php-developers/galileosr2. Thanks for your help!

  • Save image to camera roll with UIImageWriteToSavedPhotosAlbum

    - by Momeks
    Hi, I am trying to save a photo to the camera roll with a button after the user captures a picture with the camera, but I don't know why the picture isn't saved to the photo library. Here is my code:

        - (IBAction)savePhoto {
            UIImageWriteToSavedPhotosAlbum(img.image, nil, nil, nil);
        }

        - (IBAction)takePic {
            ipc = [[UIImagePickerController alloc] init];
            ipc.delegate = self;
            ipc.sourceType = UIImagePickerControllerSourceTypeCamera;
            [self presentModalViewController:ipc animated:YES];
        }

        - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
            img.image = [[info objectForKey:UIImagePickerControllerOriginalImage] retain];
            [[picker parentViewController] dismissModalViewControllerAnimated:YES];
            [picker release];
        }

    ipc is a UIImagePickerController and img is a UIImageView. What's my problem?

  • ASP.NET MVC Areas Application Using Multiple Projects

    - by harrisonmeister
    Hi, I have been following this tutorial: http://msdn.microsoft.com/en-us/library/ee307987(VS.100).aspx#registering_routes_in_account_and_store_areas and have an application (a bit more complex) set up like this. All the areas are working fine. However, I have noticed that if I change the project name of the Accounts project to, say, Areas.Accounts, it won't find any of my views within the Accounts project, because the area name is no longer the same as the project name; e.g. the Accounts Routes.cs file still has this:

        public override string AreaName
        {
            get { return "Accounts"; }
        }

    Does anyone know why I would have to change it to this:

        public override string AreaName
        {
            // Needs to match the project name?
            get { return "Areas.Accounts"; }
        }

    for my views in the Accounts project to work? I would really like the AreaName to still be Accounts, but for ASP.NET MVC to look in the "Views\Areas\Areas.Accounts\" folder when it's all munged into one project, rather than trying to find it within "Views\Areas\Accounts\". Thanks, Mark

  • How to ignore GUI as much as possible without rendering APP less GUI developer friendly

    - by pbernatchez
    The substance of an app is more important to me than its appearance, yet the GUI always seems to consume a disproportionate share of programmer time, development effort, and target resource requirements/constraints. Ideally I'd like an application architecture that permits me to develop an app against a lightweight reference GUI/kit, focus on the non-GUI aspects, and still produce a quality app that is GUI-enabled and GUI-friendly. I would want the app and the GUI to be sufficiently decoupled to make it as easy as possible for you GUI experts to plug the app into some target GUI design/framework/context, e.g. targets such as: a termcap GUI, a web app GUI framework, a desktop GUI, a thin-client GUI. In short: how do I mostly ignore the GUI, but avoid painting you into a corner when I don't even know who you are yet?
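
    For illustration, a minimal C# sketch of one common shape for this decoupling (all names are hypothetical): the application core talks only to a view interface, and each GUI target supplies its own implementation.

        // Application core: no GUI dependencies at all
        public interface IOrderView
        {
            void ShowTotal(decimal total);
        }

        public class OrderPresenter
        {
            private readonly IOrderView view;

            public OrderPresenter(IOrderView view) { this.view = view; }

            public void Recalculate(decimal unitPrice, int quantity)
            {
                view.ShowTotal(unitPrice * quantity);
            }
        }

        // One possible target: a console "reference GUI" for development
        public class ConsoleOrderView : IOrderView
        {
            public void ShowTotal(decimal total)
            {
                System.Console.WriteLine("Total: {0:C}", total);
            }
        }

    A web or desktop front end would implement IOrderView the same way, leaving the presenter untouched.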

  • WMI Win32_OperatingSystem OSArchitecture field causes exception

    - by Andrew J. Brehm
    I am trying to get information on the installed version of Windows from WMI. Most fields work: I can get the operating system "Name" as well as the "Version", both of which are fields of the Win32_OperatingSystem object I have. But another field, "OSArchitecture", generates an exception ("Not found").

        strScope = "\\" + strServer + "\root\CIMV2"
        searcher = New ManagementObjectSearcher(strScope, "SELECT * FROM Win32_OperatingSystem")
        For Each mo In searcher.Get
            strOSName = mo("Name")
            strOSVersion = mo("Version")
            strOSArchitecture = mo("OSArchitecture")
            strStatus = mo("Status")
            strLastBoot = mo("LastBootUpTime")
        Next

    Ignore the For Each loop; I don't think it has anything to do with the missing field. The documentation says that the field ought to exist and is a string: http://msdn.microsoft.com/en-us/library/aa394239(VS.85).aspx. Any ideas?

  • iPhone: Application Install Fails With "Invalid Signer" Error

    - by Scot
    The iPhone is attached to a Mac running the latest iTunes version, and I am 100% sure that her UDID is in the provisioning file. Her iPhone has not been jailbroken, and we even restored it to factory settings. I am having trouble installing our development build on this one iPhone. The error is:

        The application "[Application Name]" was not installed on the iPhone "iPhone" because the signer is not valid.

    I am 100% sure that the UDID is accurately entered in the provisioning file, and that the right provisioning file/build combo was copied correctly. This same combo has been successfully installed on over a dozen iPhones.

    Edit (from comments to an answer): We can install it on 100 iPhones with our account. We have about 40 iPhones in this provisioning profile and it works on 38 of them.

  • How to marshal int* in C#?

    - by MartyIX
    Hi, I would like to call this method in an unmanaged library:

        void __stdcall GetConstraints(
            unsigned int* puiMaxWidth,
            unsigned int* puiMaxHeight,
            unsigned int* puiMaxBoxes
        );

    My solution - the delegate definition:

        [UnmanagedFunctionPointer(CallingConvention.StdCall)]
        private delegate void GetConstraintsDel(UIntPtr puiMaxWidth, UIntPtr puiMaxHeight, UIntPtr puiMaxBoxes);

    and the call of the method:

        // PLUGIN NAME
        GetConstraintsDel getConstraints = (GetConstraintsDel)Marshal.GetDelegateForFunctionPointer(
            pAddressOfFunctionToCall, typeof(GetConstraintsDel));

        uint maxWidth, maxHeight, maxBoxes;
        unsafe
        {
            UIntPtr a = new UIntPtr(&maxWidth);
            UIntPtr b = new UIntPtr(&maxHeight);
            UIntPtr c = new UIntPtr(&maxBoxes);
            getConstraints(a, b, c);
        }

    It works, but I have to enable the "unsafe" flag. Is there a solution without unsafe code? Or is this solution OK? I don't quite understand the implications of building the project with the unsafe flag. Thanks for the help!
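
    A safe alternative, as a sketch under the assumption that the native function only writes through the three pointers: declare the pointer parameters as out uint and let the marshaler take the addresses of managed locals for you, so no unsafe block is required.

        // unsigned int* out-parameters map naturally to "out uint" in the delegate
        [UnmanagedFunctionPointer(CallingConvention.StdCall)]
        private delegate void GetConstraintsDel(out uint puiMaxWidth, out uint puiMaxHeight, out uint puiMaxBoxes);

        // ...
        uint maxWidth, maxHeight, maxBoxes;
        getConstraints(out maxWidth, out maxHeight, out maxBoxes);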

  • How do I use beta test Perl modules from test Perl scripts?

    - by DVK
    If my Perl code has a production code location and a "BETA" code location (e.g. production Perl code is in /usr/code/scripts and BETA Perl code is in /usr/code/beta/scripts; production Perl libraries are in /usr/code/lib/perl and BETA versions of those libraries are in /usr/code/beta/lib/perl), is there an easy way for me to achieve such a setup? The exact requirements are:

    - The code must be THE SAME in the production and BETA locations. To clarify: to promote any code (library or script) from BETA to production, the ONLY thing which needs to happen is literally issuing a cp command from the BETA to the production location - both the file name AND the file contents must remain identical.
    - BETA versions of scripts must call other BETA scripts and BETA libraries (if they exist) or production libraries (if BETA libraries do not exist).
    - The code paths must be the same between BETA and production, with the exception of the base directory (/usr/code/ vs. /usr/code/beta/).

    I will present how we solved the problem as an answer to this question, but I'd like to know if there's a better way.
