Search Results

Search found 17716 results on 709 pages for 'bad pool header'.


  • I have only two languages on my resume - how bad is this?

    - by Karl
    Hi there! I have a question that can best be answered here, given the vast experience some of you guys have! I am about to finish my bachelor's degree in CS and, let's face it, I am only comfortable with C++ and Python. With C++, I have no work experience to show for it, and I can't quote the C++ standard like some of the guys on SO do, but I am comfortable with the language basics and the stuff that mostly matters. With Python, I have demonstrated work experience with a good company, so I can safely put that down. I have never touched C, though I have been meaning to do it now. So I cannot put C on my resume, because I have never written it. Sure, I could finish K & R and get a sense of the language in a month, but I don't feel like listing it, because that would be dishonest to myself. So the big question is: are two languages on a resume considered OK, or is that usually a bad sign? Most resumes I have seen mention lots of languages, hence my question. Under the language section of my resume I just mention C++ and Python, and that kinda looks empty! What are your views on this, and what do you feel about such a situation? PS: I really don't want to list every single library or API I am familiar with. Or should I?

    Read the article

  • My software is hosted on a "bad" website. Can I do anything about it?

    - by Abluescarab
    The software I've created is hosted on what you could call a "bad" website. It's hard to explain, so I'll just provide an example. I've made a free password generator. This, along with most of my other FREE software, is available on this website. This is their description of my software:

    Platform: 7/7 x64/Windows 2K/XP/2003/Vista
    Size: 61.6 Mb
    License: Trial
    File Type: .7z
    Last Updated: June 4th, 2011, 15:38 UTC
    Avarage Download Speed: 6226 Kb/s
    Last Week Downloads: 476
    Toatal Downloads: 24908

    Not only is the size completely skewed, it is not trial software; it's free software. The thing is, it's not the description I'm worried about; it's the download links. The website is a scam website. They apparently link to "cracks" and "keygens", but not only is that in itself illegal, they actually link to fake download websites that give you viruses and charge your credit card. Just to list the things that are wrong with this website: they claim all software is paid software, then offer downloads for keygens and cracks; they fake all details about the program and any program reviews and ratings; and they and the download sites they link to are probably run by the same person, so they make money off of these lies. I'm only a teenager with no means to pursue legal action, so unfortunately I can't do anything that will actually get results. I'd like my software to be downloaded only from my personal website. I have links to four legitimate locations to download my software, and that's it. Essentially, is there anything I can do about this? As I said above, I can't pursue legal action, but is there some way I can discourage traffic to that website by blacklisting it or something? Can I make a claim on MY website to only download my software from the links I provide? Or should I just pay no mind? Because, honestly, it's a fair way back in the Google results. Thank you ahead of time.

    Read the article

  • What are the risks of installing a "bad quality" package?

    - by ændrük
    When I try to install sonic-visualiser_1.9cc-1_amd64.deb via the Software Center, the following warning message is displayed:

    The package is of bad quality
    The installation of a package which violates the quality standards isn't allowed. This could cause serious problems on your computer. Please contact the person or organisation who provided this package file and include the details beneath.

    Lintian check results for /home/ak/Downloads/sonic-visualiser_1.9cc-1_amd64.deb:
    Use of uninitialized value $ENV{"HOME"} in concatenation (.) or string at /usr/bin/lintian line 108.
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/bin/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/bin/sonic-visualiser 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/applications/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/applications/sonic-visualiser.desktop 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/CHANGELOG 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/COPYING 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/README 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/application/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/application/x-sonicvisualiser-layer.desktop 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/application/x-sonicvisualiser.desktop 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/pixmaps/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/pixmaps/sv-icon-light.svg 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/pixmaps/sv-icon.svg 1000/1000

    I understand that this means the package doesn't meet Debian policy, and I know how to override the warning and install the package anyway. What are the risks of doing so?

    Read the article

  • Generating Deep Arrays: Shallow to Deep, Deep to Shallow or Bad idea?

    - by MobyD
    I'm working on an array structure that will be used as the data source for a report template in a web app. The data comes from relatively complex SQL queries that return one or many rows as one-dimensional associative arrays. In the case of many, they are turned into a two-dimensional indexed array. The data is complex, and in some cases there is a lot of it. To save trips to the database (which are extremely expensive in this scenario), I'm attempting to get all of the basic arrays (one- and two-dimensional raw database data) and put them, conditionally, into a single, five-level-deep array. Organizing the data in PHP seems like a better idea than using WHERE statements in the SQL.

    Array structure:

    Array of years(
        year => array of types(
            types => array of information(
                total => value,
                table => array of data(
                    index => db array
                )
            )
        )
    )

    My first question is: is this a bad idea? Are arrays like this appropriate for this situation? If this would work, how should I go about populating it? My initial thought was shallow to deep, but the more I work on this, the more I realize that it'd be very difficult to abstract out the conditionals that determine where each item goes in the array. So it seems that starting from the most deeply nested data may be the approach I should take. If this is array abuse, what alternatives exist?
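
    A hedged sketch of the "deep to shallow" population order, shown here with Java-style nested maps and invented column names (the same idea carries over to PHP arrays): build each innermost node first, then attach it to parents created on demand, so the placement conditionals stay local to one level.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Rough analogue of the five-level report structure: each innermost
    // "information" node is built first, then hooked into parent levels
    // that are created on demand with computeIfAbsent.
    public class ReportTree {
        public static Map<Integer, Map<String, Map<String, Object>>> build(List<Map<String, Object>> rows) {
            Map<Integer, Map<String, Map<String, Object>>> years = new HashMap<>();
            for (Map<String, Object> row : rows) {
                int year = (int) row.get("year");        // hypothetical column
                String type = (String) row.get("type");  // hypothetical column
                // build the innermost node first...
                Map<String, Object> info = new HashMap<>();
                info.put("total", row.get("total"));
                info.put("table", row);
                // ...then attach it shallow-ward, creating parents as needed
                years.computeIfAbsent(year, y -> new HashMap<>())
                     .put(type, info);
            }
            return years;
        }
    }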

    Read the article

  • When creating a library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (WebDAV + the server portion) and a project I think I can have a lot of fun with, both in terms of making it and using it. However, I see a need to mock types here in order to unit-test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs out of it, unless I use something like TypeMock. So if I used TypeMock in the unit-test projects of this library, how bad would that be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this; only the unit-test libraries would. What are your thoughts on this?

    As an example, in my old private code base I have the ability to initiate a WebDAV server with just this:

    var server = new WebDAVServer();

    This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, this object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I reliably ensure that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.
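
    That wrapper-class alternative can be sketched briefly. This is a hedged illustration with invented names, shown in Java terms rather than .NET since the idea is language-neutral: hide the concrete listener behind an interface, track ownership, and let a hand-rolled fake verify the dispose behavior.

    // The sealed/concrete listener is hidden behind an interface the code
    // under test depends on, so a test can substitute a fake.
    interface Listener {
        void start();
        void close();
    }

    final class RealListener implements Listener {       // wraps the real type
        public void start() { /* delegate to the concrete listener */ }
        public void close() { /* delegate to the concrete listener */ }
    }

    final class WebDavServer implements AutoCloseable {
        private final Listener listener;
        private final boolean ownsListener;

        WebDavServer() { this(new RealListener(), true); }  // owned internally
        WebDavServer(Listener l) { this(l, false); }        // borrowed
        private WebDavServer(Listener l, boolean owns) {
            this.listener = l;
            this.ownsListener = owns;
            l.start();
        }

        @Override public void close() {
            if (ownsListener) listener.close();  // only dispose what we own
        }
    }

    // A hand-rolled fake records the call so a test can assert on it.
    class FakeListener implements Listener {
        boolean closed = false;
        public void start() {}
        public void close() { closed = true; }
    }

    public class OwnershipDemo {
        public static void main(String[] args) {
            FakeListener fake = new FakeListener();
            try (WebDavServer s = new WebDavServer(fake)) { /* use server */ }
            System.out.println("borrowed listener closed? " + fake.closed); // false
        }
    }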

    Read the article

  • Missing Package: header, Problem with MergeList, The package lists or status file could not be parsed or opened

    - by Inbar Rose
    THIS IS NOT A DUPLICATE OF SIMILAR QUESTIONS (like this one). I just had to write that first; there are tons of questions similar to this, and all of them have the same redirect to an answer that does not solve my problem, because I don't have the same problem, just the same symptom.

    I write tests for my company's application. One of these tests tries to upgrade the application from a previous version to a new version to make sure nothing breaks. When I am installing an old version of the application, some weird stuff starts to happen. Sometimes everything goes okay and nothing is wrong; other times, when trying to install, I get this message (company app name censored):

    E: Encountered a section with no Package: header
    E: Problem with MergeList /var/lib/apt/lists/XXX-amd64_Packages
    E: The package lists or status file could not be parsed or opened.

    The solutions provided in questions similar to this one (like this) do not help, and the problem keeps repeating once it has happened the first time. This has led me to believe something is wrong on the apt server where the package is being created, but searching for these errors yields no information beyond the "fix" suggested in the question I linked, and the only other source of information I could find also did not help (here). So I am asking for information:

    - What is the actual problem?
    - What causes the problem?
    - What can fix the problem?

    I hope this question is in good format; if there is a problem or missing information, I can move to chat.

    Read the article

  • How to tell your boss that his programming style is really bad?

    - by Roflcoptr
    I'm a student, and in my spare time I'm working for a big enterprise as a Java developer. The job is good, but the problem is, my boss writes very strange code. I don't want to complain, but some issues are, in my opinion, really strange. For example:

    - He doesn't use booleans at all. All boolean conditions are Strings called "YesOrNo", and the condition is then written as if (YesOrNo == "Yes").
    - There are a lot of very strange characters in method names and variables, like é õ ô or è.
    - All loops are infinite loops in the style of for(;;). At the end of the loop the condition is tested, and if the condition is fulfilled, break; is called.

    I don't know if I should tell him that I think this isn't good practice, since he is my boss and decides how and what to do. On the other hand, some of these examples are really weird. Any hints on how to cope with this? And am I the only one who thinks this is bad style?
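
    To make the first and third issues concrete, here is a sketch (invented names, not the actual code) of the style described next to the conventional form. Note that in Java, == on Strings compares references, so YesOrNo == "Yes" only appears to work because string literals are interned; it fails as soon as the String comes from user input or another source.

    public class StyleComparison {
        static boolean finished = false;
        static void doWork() { finished = true; }   // stub for illustration
        static boolean done() { return finished; }

        public static void main(String[] args) {
            // The style described above (illustrative reconstruction):
            String isActive = "Yes";          // a String standing in for a boolean
            if (isActive == "Yes") {          // reference comparison; only "works"
                // ...                        // because string literals are interned
            }
            for (;;) {                        // infinite loop with a manual exit
                doWork();
                if (done()) break;
            }

            // The conventional equivalents:
            boolean active = true;
            if (active) {
                // ...
            }
            finished = false;
            while (!done()) {                 // loop condition stated up front
                doWork();
            }
        }
    }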

    Read the article

  • How to capture a Header or Trailer Count Value in a Flat File and Assign to a Variable

    - by Compudicted
    Recently I had several questions concerning how to process files that carry a header and trailer record. Typically those files are the product of a data extract from non-Microsoft products, e.g. an Oracle database, encompassing data from various tables, where every row starts with a record identifier. For example, such a file's records could look like:

    HDR,INTF_01,OUT,TEST,3/9/2011 11:23
    B1,121156789,DATA TEST DATA,2011-03-09 10:00:00,Y,TEST 18 10:00:44,2011-07-18 10:00:44,Y
    B2,TEST DATA,2011-03-18 10:00:44,Y
    B3,LEG 1 TEST DATA,TRAN TEST,N
    B4,LEG 2 TEST DATA,TRAN TEST,Y
    FTR,4,TEST END,3/9/2011 11:27

    A developer is normally able to break the records apart using a Conditional Split Transformation component, by employing an expression similar to Output1 -- SUBSTRING(Output1,1,2) == "B1" and so on, but often a verification is required after this step to check whether the number of data records read corresponds to the number specified in the trailer record of the file. This step sometimes trips people up, so I decided to share what I came up with. As an aside, I want to mention that the approach I use is slightly more portable than some others I have seen, because I use a separate DFT that can be copied and pasted onto a new SSIS package designer surface or re-used within the same package, and it can survive several trailer/footer records (!). [Screenshot: the completed DFT.]

    The first step is to create a Flat File Connection Manager and make sure you get the row split into columns. [Screenshot: Flat File Connection Manager column setup.] After you are done with the Flat File connection, move on to adding an Aggregate, which is used simply to assign a value to a variable (here the Aggregate handles the possibility of multiple footers/headers). [Screenshot: Aggregate configuration.]

    The next step is adding a Script Transformation as destination, which requires very little coding. [Screenshots: the variable setup and the script code.] As you can see, it is important to place your code into the appropriate routine in the script; otherwise the end result may not be as expected. As the last step, you would use a regular Script Component to compare the variable value obtained from the DFT above to a package variable value (obtained, say, via a Row Count component) to determine whether the file being processed has the right number of rows.
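
    The screenshots with the original script code did not survive here, but the verification logic itself is small. Here is a hedged, standalone sketch of the same idea, written in Java rather than an SSIS Script Component, using the sample layout above (the file name extract.csv is hypothetical):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    // Counts the data records (B1..B4 in the sample layout) and compares the
    // total against the count carried in the FTR trailer's second field.
    public class TrailerCountCheck {
        public static void main(String[] args) throws IOException {
            List<String> lines = Files.readAllLines(Paths.get("extract.csv"));
            int dataRows = 0;
            int trailerCount = -1;
            for (String line : lines) {
                if (line.startsWith("HDR")) continue;       // header record
                if (line.startsWith("FTR")) {                // trailer record
                    trailerCount = Integer.parseInt(line.split(",")[1]);
                } else if (!line.isEmpty()) {
                    dataRows++;                              // B1..B4 data record
                }
            }
            if (dataRows == trailerCount) {
                System.out.println("OK: " + dataRows + " data records match trailer");
            } else {
                System.out.println("Mismatch: " + dataRows + " read, trailer says " + trailerCount);
            }
        }
    }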

    Read the article

  • What is logical cohesion, and why is it bad or undesirable?

    - by Matt Fenwick
    From the c2wiki page on coupling & cohesion, the cohesion (interdependency within module) strength/level names, from worse to better (high cohesion is good):

    - Coincidental Cohesion (worst): module elements are unrelated.
    - Logical Cohesion: elements perform similar activities as selected from outside the module, i.e. by a flag that selects the operation to perform (see also CommandObject); i.e. the body of the function is one huge if-else/switch on an operation flag.
    - Temporal Cohesion: operations related only by the general time they are performed (i.e. initialization() or FatalErrorShutdown()).
    - Procedural Cohesion: elements involved in different but sequential activities, each on different data (usually could be trivially split into multiple modules along linear sequence boundaries).
    - Communicational Cohesion: unrelated operations except that they need the same data or input.
    - Sequential Cohesion: operations on the same data in significant order; output from one function is input to the next (pipeline).
    - Informational Cohesion: a module performs a number of actions, each with its own entry point, with independent code for each action, all performed on the same data structure. Essentially an implementation of an abstract data type, i.e. define the structure of sales_region_table and its operators: init_table(), update_table(), print_table().
    - Functional Cohesion: all elements contribute to a single, well-defined task, i.e. a function that performs exactly one operation: get_engine_temperature(), add_sales_tax().

    (Emphasis mine.) I don't fully understand the definition of logical cohesion. My questions are: what is logical cohesion? Why does it get such a bad rap (the second-worst kind of cohesion)?
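
    As a hedged illustration with invented names: a logically cohesive module bundles loosely related operations behind a selector flag, forcing every caller through one entry point, whereas the functionally cohesive alternative gives each well-defined task its own method.

    // Logical cohesion: one entry point, a flag chooses the operation, and the
    // body is the big switch the quote describes.
    public class ReportModule {
        public static void run(String operation) {
            switch (operation) {
                case "print":  System.out.println("printing report");  break;
                case "email":  System.out.println("emailing report");  break;
                case "export": System.out.println("exporting report"); break;
                default: throw new IllegalArgumentException(operation);
            }
        }

        public static void main(String[] args) {
            run("print");      // caller must know the magic flag value
            Reports.print();   // versus depending only on what it uses
        }
    }

    // Functionally cohesive alternative: each task is its own operation.
    class Reports {
        static void print()  { System.out.println("printing report");  }
        static void email()  { System.out.println("emailing report");  }
        static void export() { System.out.println("exporting report"); }
    }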

    Read the article

  • Is it bad to be the only person supporting software you have developed?

    - by trpt4him
    My employer has a need for a web-based application to manage and share data within the department, with approximately 50-75 possible users. I feel I have the ability to write it for them. I would likely use Python/Django with a MySQL database, so it would be open source. However, I'm the only IT person in my department (our larger organization has a separate IT support staff with which I often work, but not for web development). I want to develop this application, but if I leave in 1-2 years, and someone else has to come in after me and support it, will this be seen as a bad decision? This is assuming all the obvious points -- I will write documentation, I will comment my code, and I will strive to follow good application design principles. But will that be enough? In principle, is it acceptable for one person to develop and support an entire web application? Is this a "do first, then show and ask" kind of situation, or should I be certain it will be adopted by everyone involved first?

    Read the article

  • How bad would be to focus on iOS/Android development for an indie developer?

    - by kender
    After some time developing games for others, I'm thinking of moving towards my own productions. My background is 10+ years of software development, with the last 2 years spent on iOS development (Objective-C and CoronaSDK). With my current experience in Corona I can develop quickly for iOS and Android, and this is something that I'm probably gonna do with several of the game ideas I have, at least for the prototype part. But I'm wondering if it's not a bad idea to focus on those two systems only. After all, there are other mobile platforms, and there are PCs, Macs and Linux boxes... all of them having gamers using them. I was wondering if it wouldn't be a good idea to try some other SDK, giving me more flexibility when it comes to platform independence. There's Unity3D (though I think I can develop a 2D game in it), and there's MoAI, from what I've checked. I see a few options, and I'm not sure which is best, as I have little experience in this field (publishing my own games):

    1. Stick with CoronaSDK the whole time, release for iOS and Android platforms, and screw other mobile devices and PCs.
    2. Use Corona for prototyping, then, when the idea moves into the "production" phase, rewrite it in MoAI or Unity3D for broader platform support.
    3. Start with one of those two SDKs right now (which means the prototype phase will be delayed a bit, but after that I can jump right into real coding).

    Any clues as to what to do here?

    Read the article

  • Is constantly looking for code examples a sign of a bad developer?

    - by Newly Insecure
    I am a CS student with several years of experience in C and C++. For the last few years I've been constantly working with Java/Objective-C doing app development, and now I have switched to web development, mainly focused on Ruby on Rails. I've come to the realization that (as with app development, really) I reference other code way too much. I constantly Google functionality for lots of things I imagine I should be able to do from scratch, and it's really cracked my confidence a bit. Basic fundamentals are not an issue; I hate to use this as an example, but I can run through javabat in both Java and Python at a sprint. Obviously that is not an accomplishment, but what I mean to say is that I think I have a strong base in the fundamentals. I know what I need to use, typically, but I reference syntax constantly. I would love some advice and input on this, as it has been holding me back pretty solidly in terms of looking for work in this field, even though I'm finishing my degree. My main reason for asking is not really about employment; it's more that I don't want to be the only guy at a hackathon not hammering out nonstop code, sitting there with 20 Google/GitHub tabs open, and I have refrained from attending any due to a slight lack of confidence. Is a person a bad developer for constantly looking to code examples for moderate to complex tasks?

    Read the article

  • Do you feel bad when you have to learn new things?

    - by tactoth
    New things are not always cool. I see many people say they are very bored by doing similar things day after day. For me it's the opposite: I'm always learning something new. During the last year and a half, nearly every two months I have needed to do lots of research on a totally new topic: RTMP, MP4, SIP, VNC, Smooth Streaming, ... I have to read lots of specifications and download tons of open source projects to understand the concepts, and then turn them into my own runnable code. And it was so bad! My brain never gets to feel sure of and familiar with anything; just when it comes close, it has to switch to the next thing. I kind of envy people who build higher-level applications, because they can stay very focused, and their knowledge set covers most of what their job requires. Everything is quite measurable, direct and straightforward. Have you ever had a similar feeling? I'm thinking of asking my boss to assign me some other piece of work, so that working feels like moving forward on a broad road instead of feeling my way in the dark. I think it would be more relaxing. Any suggestions?

    Read the article

  • On installing nvidia drivers on 12.10 I get "Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)"

    - by james
    New Ubuntu user here - I recently made the mistake of trying a different nvidia driver. I'd managed to get the previous one (nvidia-current) working through Software Sources a few weeks ago. The other day I tried to switch over to nvidia-experimental-310, and this produced a system error. Swapping back and forth between proprietary drivers now always causes an error, and I can't get any of them to work. Installing through the terminal, I get this error message every time:

    Building initial module for 3.5.0-19-generic
    Error! Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)
    Consult /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log for more information

    On rebooting, I end up with the crappy screen resolution and a thick black border around the screen. I use gksudo software-properties-gtk to bring up Software Sources, where I can change back to the nouveau driver, which restores my screen. After that I can't find /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log, so I can't tell you what's inside. Any ideas what might be preventing the nvidia driver from installing?

    SOLUTION FOUND. Okay - so I have a workaround. This is what has worked:

    1. Upgrade to kernel 3.7.0, as detailed here
    2. Upgrade to the latest version of the nvidia drivers, as detailed here

    No idea what was happening with kernel 3.5.0-19, but this seems to be better. A little slower on boot, maybe, but after days of messing around it's nice to have something that works.

    Read the article

  • Video quality too bad while playing (any) videos in Intel GM965/GL960 Integrated Graphics Controller Ubuntu 12.04

    - by Sukhdev
    I have searched blogs and forums and installed several drivers, but I can't find a solution that provides video quality equivalent to that of Windows 7. Kindly help. Video quality, especially color, is very bad when playing any video in any media player. Configuration details:

    Ubuntu 12.04
    Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller

    The results of the following commands are:

    a) sudo lspci | grep VGA
    00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 0c)

    b) find /dev -group video
    /dev/fb0
    /dev/dri/card0
    /dev/dri/controlD64
    /dev/agpgart

    c) glxinfo | grep -i vendor
    server glx vendor string: SGI
    client glx vendor string: ATI
    OpenGL vendor string: Tungsten Graphics, Inc

    d) sudo lshw -C video
    *-display:0
        description: VGA compatible controller
        product: Mobile GM965/GL960 Integrated Graphics Controller (primary)
        vendor: Intel Corporation
        physical id: 2
        bus info: pci@0000:00:02.0
        version: 0c
        width: 64 bits
        clock: 33MHz
        capabilities: msi pm vga_controller bus_master cap_list rom
        configuration: driver=i915 latency=0
        resources: irq:44 memory:fea00000-feafffff memory:e0000000-efffffff ioport:efe8(size=8)
    *-display:1 UNCLAIMED
        description: Display controller
        product: Mobile GM965/GL960 Integrated Graphics Controller (secondary)
        vendor: Intel Corporation
        physical id: 2.1
        bus info: pci@0000:00:02.1
        version: 0c
        width: 64 bits
        clock: 33MHz
        capabilities: pm bus_master cap_list
        configuration: latency=0
        resources: memory:feb00000-febfffff

    I have spent days installing various drivers and then uninstalling them, but I can't come up with a solution. Please help.

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in recent months, and, well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects, when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance, on the other hand, are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web tools like Visual Studio to facilitate the process. By comparison, Windows Services, or anything self-hosted for that matter, seem convoluted.

    In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built-in Web server, or an IIS application running a service application, neither of which follows the standard Service or Web App template.

    Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform, including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code, and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits we set out to move from the self-hosted service into an ASP.NET Web app instead.

    The Missing Link for ASP.NET as a Service: Auto-Loading

    I've had moments in the past where I wanted to run a 'service like' application in ASP.NET, because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact, nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET, and it worked great; it's still running, doing its job, today.

    The tricky part of running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.

    Luckily, today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started, by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired, and your app stays up and running at all times.
    All the other features, like Application Pool recycling and auto-shutdown after idle time, still work, but IIS will then always immediately re-launch the application.

    Getting started with Application Initialization

    As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer. Using IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager. This is an optional component, so make sure you explicitly select it.

    IIS Configuration for Application Initialization

    Initialization needs to be applied at the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.

    Start with the Application Pool. Here you need to set both Start Automatically, which is always set, and StartMode, which should be set to AlwaysRunning. Both have to be set: the Start Automatically flag is true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set, the site settings have no effect.

    Now on the Site/Application level you can specify whether the site should preload: set the Preload Enabled flag to true.

    At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.

    If you want a little more control over the load process, you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs, you can display a static HTML page instead:

    <system.webServer>
      <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true">
        <add initializationPage="ping.ashx" />
      </applicationInitialization>
    </system.webServer>

    This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page, in this case ping.ashx, which just returns a simple OK string - i.e. a fast pipeline request. This request is run immediately after an Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site, you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure if that's such a brilliant idea, since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.

    Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/), and even if there's no matching request at the end of that request, it'll still fire the request through the IIS pipeline.
    Ideally, though, you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).

    What about AppDomain Restarts?

    In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns, which can occur for a variety of reasons:

    - Files are updated in the BIN folder
    - Web Deploy to your site
    - web.config is changed
    - Hard application crash

    These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.

    In order to keep the app running across AppDomain recycles, you can resort to a simple ping in the Application_End event:

    protected void Application_End()
    {
        var client = new WebClient();
        var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
        client.DownloadString(url);
        Trace.WriteLine("Application Shut Down Ping: " + url);
    }

    This fires an ASP.NET URL on the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

    Manual Configuration in ApplicationHost.config

    The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags, so you'll have to edit them manually.

    When you install the Application Initialization component into IIS, it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form, the module registration did not occur and I had to manually add it:

    <globalModules>
      <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" />
    </globalModules>

    Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered.

    Next you need to configure the Application Pool and the Web site. The following are the two relevant entries in ApplicationHost.config:

    <system.applicationHost>
      <applicationPools>
        <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated">
          <processModel identityType="LocalSystem" setProfileEnvironment="true" />
        </add>
      </applicationPools>
      <sites>
        <site name="Default Web Site" id="1">
          <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true">
            <virtualDirectory path="/" physicalPath="C:\Clients\…" />
          </application>
        </site>
      </sites>
    </system.applicationHost>

    On the Application Pool, make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site, make sure to set the preloadEnabled flag to true. And that's all you should need. You can still set the web.config settings described above as well.

    ASP.NET as a Service?

    In the particular application I'm working on currently, we have a queue manager that runs as a standalone service, polling a database queue, picking out jobs and processing them on several threads. The service can spin up any number of threads and keep them alive in the background while IIS is running, doing its own thing. These are newly created threads, so they sit completely outside of the IIS thread pool.
    In order for this service to work, all it needs is a long-running reference that keeps it alive for the lifetime of the application.

    In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

    public class Global : System.Web.HttpApplication
    {
        private static ApplicationScheduler scheduler;
        private static ServiceLauncher launcher;

        protected void Application_Start(object sender, EventArgs e)
        {
            // Pings the service and ensures it stays alive
            scheduler = new ApplicationScheduler()
            {
                CheckFrequency = 600000
            };
            scheduler.Start();

            launcher = new ServiceLauncher();
            launcher.Start();

            // register so shutdown is controlled
            HostingEnvironment.RegisterObject(launcher);
        }
    }

    By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code, except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume etc.). Otherwise the behavior and operation are very similar.

    In this application ASP.NET serves two purposes: it acts as the host for SignalR, and it provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests, or (as we do now) through the SignalR service.

    Registering a Background Object with ASP.NET

    Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject.Stop() method looks like on the launcher:

    public void Stop(bool immediate = false)
    {
        LogManager.Current.LogInfo("QueueManager Controller Stopped.");
        Controller.StopProcessing();
        Controller.Dispose();
        Thread.Sleep(1500);  // give background threads some time
        HostingEnvironment.UnregisterObject(this);
    }

    Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter.

    RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing, to allow clean shutdowns when the AppDomain shuts down.

    Testing it out

    I'm still in the testing phase with this particular service, to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows Service startup with Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting: with the new OWIN-based hosting, no code changes at all were required, merely a couple of configuration file settings and an assembly directive to point at the SignalR startup class. Sweet! It also seems like SignalR is noticeably faster running inside of IIS compared to self-host.
    Startup feels faster because of the preload.

    Starting and Stopping the 'Service'

    Because the application is running as a Web server, it's easy to have a Web interface for starting and stopping the services running inside of it. For our queue manager, the SignalR service and front monitoring app have a play and stop button for toggling the queue.

    If you want more administrative control and have it work more like a Windows Service, you can also stop the application pool explicitly from the command line, which is equivalent to stopping and restarting a service. To start and stop from the command line you can use the IIS appCmd tool. To stop:

    > %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

    and to start:

    > %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

    Note that when you explicitly force the AppPool to stop running, either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.

    What's not to like?

    There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable, the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires, be offloaded to its own Web server.

    Easier to work with

    But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing, I can simply turn off the auto-launch features and launch the service on demand through IIS, simply by hitting a page on the site. If I want to shut down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want, and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different than debugging any other code.

    Summary

    Using ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR, it becomes very, very easy to create rich services that can also communicate their status easily to the outside world. Whether it's existing applications that need some background processing for scheduling-related tasks, or whether you just create a separate site altogether just to host your service, it's easy to do, and you can leverage the same tool chain you're already using for other Web projects.
    If you have lots of service projects, it's worth considering… give it some thought…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in ASP.NET  SignalR  IIS

    Read the article

  • How do I bind one TabControl to the TabItems of another TabControl in WPF?

    - by joebeazelman
    I've defined the following TabControl, called TabControl1:

    <TabControl>
      <TabItem Header="Cheese">The Cheese Tab</TabItem>
      <TabItem Header="Pepperoni">The Pepperoni Tab</TabItem>
      <TabItem Header="Mushrooms">The Mushrooms Tab</TabItem>
    </TabControl>

    I've defined another TabControl, TabControl2, which is dynamically loaded from an add-in or plugin:

    <TabControl>
      <TabItem Header="Anchovies">The Anchovies Tab</TabItem>
      <TabItem Header="Jalepenos">The Jalepenos Tab</TabItem>
      <TabItem Header="Rattle Snake">The Rattle Snake Tab</TabItem>
    </TabControl>

    After TabControl1 binds to TabControl2's items after the "Cheese" item, TabControl1 should look like this:

    <TabControl>
      <TabItem Header="Cheese">The Cheese Tab</TabItem>
      <TabItem Header="Anchovies">The Anchovies Tab</TabItem>
      <TabItem Header="Jalepenos">The Jalepenos Tab</TabItem>
      <TabItem Header="Rattle Snake">The Rattle Snake Tab</TabItem>
      <TabItem Header="Pepperoni">The Pepperoni Tab</TabItem>
      <TabItem Header="Mushrooms">The Mushrooms Tab</TabItem>
    </TabControl>

    Read the article

  • Filling a byte array in Java

    - by Corleone
    Hey all! For part of a project I'm working on, I am implementing an RTPpacket where I have to fill the header array of bytes with the RTP header fields.

    //size of the RTP header:
    static int HEADER_SIZE = 12; // bytes

    //Fields that compose the RTP header
    public int Version;        // 2 bits
    public int Padding;        // 1 bit
    public int Extension;      // 1 bit
    public int CC;             // 4 bits
    public int Marker;         // 1 bit
    public int PayloadType;    // 7 bits
    public int SequenceNumber; // 16 bits
    public int TimeStamp;      // 32 bits
    public int Ssrc;           // 32 bits

    //Bitstream of the RTP header
    public byte[] header = new byte[ HEADER_SIZE ];

    This was my approach:

    /*
     * bits 0-1: Version
     * bit 2: Padding
     * bit 3: Extension
     * bits 4-7: CC
     */
    header[0] = new Integer( (Version << 6)|(Padding << 5)|(Extension << 4)|CC ).byteValue();

    /*
     * bit 0: Marker
     * bits 1-7: PayloadType
     */
    header[1] = new Integer( (Marker << 7)|PayloadType ).byteValue();

    /* SequenceNumber takes 2 bytes = 16 bits */
    header[2] = new Integer( SequenceNumber >> 8 ).byteValue();
    header[3] = new Integer( SequenceNumber ).byteValue();

    /* TimeStamp takes 4 bytes = 32 bits */
    for ( int i = 0; i < 4; i++ )
        header[7-i] = new Integer( TimeStamp >> (8*i) ).byteValue();

    /* Ssrc takes 4 bytes = 32 bits */
    for ( int i = 0; i < 4; i++ )
        header[11-i] = new Integer( Ssrc >> (8*i) ).byteValue();

    Any other, maybe 'better' ways to do this?
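
    One commonly suggested alternative (a hedged sketch, not the poster's code): java.nio.ByteBuffer writes multi-byte values in big-endian order by default, which matches network byte order, so the manual shifts for the sequence number, timestamp and SSRC can be replaced with putShort/putInt, and the Integer wrappers become plain (byte) casts. This is a drop-in method for the same class, with an added import:

    import java.nio.ByteBuffer;

    /* Packs the same 12-byte RTP header using ByteBuffer; the buffer is
       big-endian by default, which matches network byte order. */
    public byte[] buildHeader() {
        ByteBuffer buf = ByteBuffer.allocate(HEADER_SIZE);
        buf.put((byte) ((Version << 6) | (Padding << 5) | (Extension << 4) | CC));
        buf.put((byte) ((Marker << 7) | PayloadType));
        buf.putShort((short) SequenceNumber); // 16 bits
        buf.putInt(TimeStamp);                // 32 bits
        buf.putInt(Ssrc);                     // 32 bits
        return buf.array();
    }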

    Read the article

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish session sharing between the new servers by changing only three lines in the php.ini file (the session.save_handler and session.save_path). I replaced:

    session.save_handler = files

    with:

    session.save_handler = memcache

    Then on the master webserver I set session.save_path to point to localhost:

    session.save_path="tcp://localhost:11211"

    and on the slave webserver I set session.save_path to point to the master:

    session.save_path="tcp://192.168.0.1:11211"

    Job done, I tested it and it works. But... obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am a bit more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load-balanced to the slave webserver, their session will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example:

    Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
    Slave: session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"

    Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failover (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication; it's more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this, I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.

    Read the article

  • ASP.NET Web Forms is bad, or what am I missing?

    - by iveqy
    Being a PHP guy myself, I recently had to write a spider for an ASP.NET site, and I was really surprised by the different approach to Ajax and form handling. For example, in the PHP sites I've worked with, deleting a database entry would be something like GET delete.php?id=&confirm=yes, and you'd get a "success" back in some form (in the Ajax case, probably a JSON reply). In this ASP.NET application, you would instead post a form, including all inputs on the page, with a huge __VIEWSTATE and __EVENTVALIDATION. This would be more than 10 times as big as the above. The reply would be the complete page again, with a footer containing some structured data for JavaScript to parse and display the result. Again, the whole page is sent, and then thrown away(?), since it's already displayed. Why not just send the footer with the data to parse (it's not JSON or XML, but a |-separated list)? I really can't see why you would design a system that way. Usually you have a fast client and a somewhat fast server, but a really slow connection. Why not keep the data transfer to a minimum? Why those huge __VIEWSTATE and __EVENTVALIDATION fields? It seems that everything is way too chatty and way too complicated. I really can't see the point, and that usually means I'm missing something. So please tell me, what are the reasons for this design, and what benefits (and weaknesses) does it have? (Yes, I know that __VIEWSTATE is used to tell what kind of form configuration should be sent back to the server. But WHY is this needed?) Please keep this discussion strictly technical and avoid flamewars.

    Update: Please excuse the somewhat rantish question. I tried to explain my view in order to get a better answer. I am not saying that ASP.NET is bad; I am saying that I don't understand the meaning of those concepts. Usually that means I have things to learn, rather than the concepts being wrong. I appreciate the explanations that "you don't have to do it this way in ASP.NET"; I'll read up on MVC and other .NET technologies. However, there must be a reason for this site (the one I referred to) to be written the way it is. It's written by professionals for a big organisation with far more experience than I have. Any explanation of their (possible) design choice would be welcome.

    Read the article

  • How to explain bad software to non-technical people?

    - by mtutty
    In discussing software development with non-technical people (customers, business owners, project sponsors, etc.), I often resort to analogies and metaphors. It's relatively easy and effective to use a "house" or other metaphor for describing the size and complexity of new development. However, we often inherit someone else's code or data, and this approach doesn't seem to hold up as well when trying to explain why we're gutting something that already seems to work. Of course we can point to cycle time and cost to be saved in the future, but this generally means nothing to business folks. I know doctors can say "just take this pill," but I'm not sure that software devs have the same authority. Ideas?

    EDIT: Let me add a bit to the discussion. The specific project I'm talking about has customers that don't realize (or care about) specific aspects of the system we're retiring (i.e., they think it was just fine):

    - The system would save a NEW RECORD every time someone updated a field.
    - The system contained tables for reference data. These tables had new records added every day, even though they were duplicates of previous records. And there was no way to tie the reference data used for a particular case at the time it was closed. This is like 99% of the data in the old system.
    - The field NAMES also have spaces, apostrophes and other inappropriate characters in them, making everything harder to work with.
    - In addition to the incredible amount of duplicate data, they have around 1000 XLS files with data they want added to the system. Previously, they would do a spreadsheet for each case in the database, IN ADDITION TO what they typed into the database.

    Getting rid of this old, unneeded information and piping in the XLS data comprises about 80% of the total project effort, and was not something we could accurately predict. I'm trying to find a concrete way to describe how bad this thing was, mostly so that the customer will understand why the migration process has been so time-consuming. The actual coding was done pretty quickly and the new system works fine, but without the old data they won't be happy. Sorry to get into the weeds, but most of the answers I've seen so far are pretty basic scope/schedule/cost things. I've been doing this for 15 years, so this really is more of a reflective, philosophical question - but without some of the details it can be difficult to really appreciate the awful beauty of this problem.

    Read the article

  • Getting wireless to work on Ubuntu 11.10 (on a mac machine)

    - by yayu
    I have a 2008 white 13.3-inch MacBook that now solely runs Ubuntu. However, I cannot get wifi or the wired connection to work on it. Regarding the cable, it at least tries to connect, but it eventually disconnects from the wired network. Here is the output of lspci (pastebin). I tried installing b43-fwcutter and firmware-b43, but I get errors. (I loaded them from a thumb drive and tried sudo dpkg -i.) I cannot understand these instructions from the documentation, as I cannot locate the pool directory:

    b43-fwcutter is located on the Ubuntu install media under ../pool/main/b/b43-fwcutter/ and patch is located under ../pool/main/p/patch/ or both in the official repositories online. Note: In some versions (10.04 and 11.04 at least) there is not a /pool/main/p/patch/ If this file is missing then you don't need it. In this case you only need to install /pool/main/b/b43-fwcutter by following the instructions below.

    Read the article

  • Pooling (Singleton) Objects Against Connection Pools

    - by kolossus
    Given the following scenario:

    - A canned enterprise application that maintains its own connection pool
    - A homegrown client application for the enterprise app, built using the Spring framework with the DAO pattern

    While I may have a simplistic view of this, I think the following line of thinking is sound:

    1. Having a fixed pool of DAO objects holding on to connection objects from the pool. Clearly, the pool should be capable of scaling up (or down, depending on need), and the connection objects must outnumber the DAOs by a healthy margin. Good.
    2. Instantiating brand-new DAOs for every request to access the enterprise app, where each DAO attempts to grab a connection from the pool and releases it when it's done. Bad.

    Since these are service objects, there will be no (mutable) state held by the objects (reduced risk of concurrency issues). I also think that with #1 there should be little to no resource contention, while in #2 there will almost always be a DAO waiting to be serviced. Is my thinking correct, and what could go wrong?
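
    For reference, a minimal sketch of the stateless-DAO shape being described (names are invented; javax.sql.DataSource stands in for the connection pool). Because the DAO holds no mutable state, a single shared instance can serve concurrent callers, checking a connection out of the pool only for the duration of each call:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Stateless DAO: the only field is the pool itself, so one instance can be
    // shared safely across threads. Each call borrows a connection and returns
    // it to the pool when the try-with-resources block closes it.
    public class CustomerDao {
        private final DataSource pool; // the connection pool (e.g. managed by Spring)

        public CustomerDao(DataSource pool) {
            this.pool = pool;
        }

        public String findName(long customerId) throws SQLException {
            String sql = "SELECT name FROM customer WHERE id = ?"; // hypothetical schema
            try (Connection con = pool.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            } // connection goes back to the pool here, not physically closed
        }
    }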

    Read the article

  • WCF fails to deserialize correct(?) response message security headers (Security header is empty)

    - by Soeteman
    I'm communicating with an OC4J webservice, using a WCF client. The client is configured as follows:

    <basicHttpBinding>
      <binding name="MyBinding">
        <security mode="TransportWithMessageCredential">
          <transport clientCredentialType="None" proxyCredentialType="None" realm=""/>
          <message clientCredentialType="UserName" algorithmSuite="Default"/>
        </security>
      </binding>
    </basicHttpBinding>

    My client code looks as follows:

    ServicePointManager.CertificatePolicy = new AcceptAllCertificatePolicy();
    string username = ConfigurationManager.AppSettings["user"];
    string password = ConfigurationManager.AppSettings["pass"];
    // create the client instance
    WebserviceClient client = new WebserviceClient();
    client.Endpoint.Binding = new BasicHttpBinding("MyBinding");
    // add credentials
    client.ClientCredentials.UserName.UserName = username;
    client.ClientCredentials.UserName.Password = password;
    // execute the request
    var response = client.Ping();

    I've altered the CertificatePolicy to accept all certificates, because I need to insert Charles (an SSL proxy) between client and server to intercept the actual XML that is sent across the wire. My request looks as follows:

    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <s:Header>
        <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
          <u:Timestamp u:Id="_0">
            <u:Created>2010-04-01T09:47:01.161Z</u:Created>
            <u:Expires>2010-04-01T09:52:01.161Z</u:Expires>
          </u:Timestamp>
          <o:UsernameToken u:Id="uuid-9b39760f-d504-4e53-908d-6125a1827aea-21">
            <o:Username>user</o:Username>
            <o:Password o:Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">pass</o:Password>
          </o:UsernameToken>
        </o:Security>
      </s:Header>
      <s:Body>
        <getPrdStatus xmlns="http://mynamespace.org/wsdl">
          <request xmlns="" xmlns:a="http://mynamespace.org/wsdl" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
            <a:IsgrStsRequestTypeUser>
              <a:prdCode>LEPTO</a:prdCode>
              <a:sequenceNumber i:nil="true" />
              <a:productionType i:nil="true" />
              <a:statusDate>2010-04-01T11:47:01.1617641+02:00</a:statusDate>
              <a:ubn>123456</a:ubn>
              <a:animalSpeciesCode>RU</a:animalSpeciesCode>
            </a:IsgrStsRequestTypeUser>
          </request>
        </getPrdStatus>
      </s:Body>
    </s:Envelope>

    In return, I receive the following response:

    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://mynamespace.org/wsdl">
      <env:Header>
        <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" env:mustUnderstand="1" />
      </env:Header>
      <env:Body>
        <ns0:getPrdStatusResponse>
          <result>
            <ns0:IsgrStsResponseTypeUser>
              <ns0:prdCode>LEPTO</ns0:prdCode>
              <ns0:color>green</ns0:color>
              <ns0:stsCode>LEP1</ns0:stsCode>
              <ns0:sequenceNumber xsi:nil="1" />
              <ns0:productionType xsi:nil="1" />
              <ns0:IAndRCode>00</ns0:IAndRCode>
              <ns0:statusDate>2010-04-01T00:00:00.000+02:00</ns0:statusDate>
              <ns0:description>Gecertificeerd vrij</ns0:description>
              <ns0:ubn>123456</ns0:ubn>
              <ns0:animalSpeciesCode>RU</ns0:animalSpeciesCode>
              <ns0:name>gecertificeerd vrij</ns0:name>
              <ns0:ranking>17</ns0:ranking>
            </ns0:IsgrStsResponseTypeUser>
          </result>
        </ns0:getPrdStatusResponse>
      </env:Body>
    </env:Envelope>

    Why can't WCF deserialize this response header? I'm getting a "Security header is empty" exception:

    Server stack trace:
    at System.ServiceModel.Security.ReceiveSecurityHeader.Process(TimeSpan timeout)
    at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessageCore(Message& message, TimeSpan timeout)
    at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessage(Message& message, TimeSpan timeout)
    at System.ServiceModel.Security.SecurityProtocol.VerifyIncomingMessage(Message& message, TimeSpan timeout, SecurityProtocolCorrelationState[] correlationStates)
    at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.ProcessReply(Message reply, SecurityProtocolCorrelationState correlationState, TimeSpan timeout)
    at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.Request(Message message, TimeSpan timeout)
    at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
    at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
    at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

    Who knows what is going on here? I've already tried Rick Strahl's suggestion and removed the timestamp from the request header. Any help greatly appreciated!

    Read the article
