Search Results

Search found 19157 results on 767 pages for 'shared folder'.


  • Using multithreading for loop

    - by annelie
    Hello, I'm new to threading and want to do something similar to this question: http://stackoverflow.com/questions/100291/speed-up-loop-using-multithreading-in-c-question However, I'm not sure that solution is the best one for me, as I want the threads to keep running and never finish. (I'm also using .NET 3.5 rather than 2.0, as in that question.) I want to do something like this:

        foreach (Agent agent in AgentList)
        {
            // I want to start a new thread for each of these
            agent.DoProcessLoop();
        }

        public void DoProcessLoop()
        {
            while (true)
            {
                // do the processing
                // this is things like check folder for new files, update database
                // if new files found
            }
        }

    Would a ThreadPool be the best solution, or is there something that suits this better? Thanks, Annelie
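
    A minimal sketch of one way this kind of never-ending loop is often handled (my own illustration, not part of the original question): because each DoProcessLoop runs forever, dedicated threads are usually a better fit than the ThreadPool, whose worker threads are intended for short-lived tasks.

        using System.Threading;

        foreach (Agent agent in AgentList)
        {
            Thread worker = new Thread(agent.DoProcessLoop);
            worker.IsBackground = true; // assumption: the loops should stop when the application exits
            worker.Start();
        }

    A Thread.Sleep (or a FileSystemWatcher) inside the while loop keeps each thread from spinning at 100% CPU between folder checks.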

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
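
    For orientation, a rough sketch of where the two settings discussed above live in a typical Coherence deployment (element names follow the standard cache and operational configuration schemas, but treat the values as placeholders, not recommendations):

        <!-- coherence-cache-config.xml: partition count for a distributed service -->
        <distributed-scheme>
          <scheme-name>example-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <partition-count>1021</partition-count>
          <backing-map-scheme>
            <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>

        <!-- tangosol-coherence-override.xml: well-known addresses (multicast disabled) -->
        <unicast-listener>
          <well-known-addresses>
            <socket-address id="1">
              <address>192.168.0.10</address>
              <port>8088</port>
            </socket-address>
          </well-known-addresses>
        </unicast-listener>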

    Read the article

  • Where will log4net create this log file?

    - by Blankman
    When I set the file value to 'logs\log-file.txt', where exactly will it create this folder? In the /bin directory? My web.config looks like:

        <log4net>
          <appender name="FileAppender" type="log4net.Appender.FileAppender">
            <file value="logs\log-file.txt" />
            <appendToFile value="true" />
            <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
            </layout>
          </appender>
        </log4net>

    Is this the correct way to log?

        ILog logger = LogManager.GetLogger(typeof(CCController));
        logger.Error("Some Page", ex); // where ex is the exception instance
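
    A small sketch for checking this at runtime (my own addition, not from the question): a relative path on a FileAppender is typically resolved against the application's base directory, which for an ASP.NET site is the web application root rather than /bin, so the snippet below prints where the file should end up.

        // Writes out the directory that relative appender paths are resolved against
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;
        string expectedLog = System.IO.Path.Combine(baseDir, @"logs\log-file.txt");
        logger.InfoFormat("Base directory: {0}; expected log file: {1}", baseDir, expectedLog);

    Note that the worker process identity also needs write permission on that logs folder, or nothing will appear at all.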

    Read the article

  • aspnet_regsql not working at all

    - by user252160
    I would like to create the ASP.NET user database template on a database of my own, because I'd like to fully integrate the user system with the rest of my DB. As I've read, I needed to use the aspnet_regsql tool. I put in all the options (because my database is running on SQLEXPRESS and is in an .mdf file in my project's folder). The program starts and seemingly runs without any errors; however, when I open the database after that, no tables or stored procedures have been added. One more thing: I did one more test. I intentionally gave the -d option a wrong .mdf file address, and surprisingly, the program "finished" correctly, yet no file was created or modified whatsoever.
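
    For reference, a typical invocation looks something like this (a sketch with placeholder names; the exact flags depend on how the .mdf is attached):

        aspnet_regsql.exe -S .\SQLEXPRESS -E -d MyAppDb -A all

    Here -S names the server instance, -E uses Windows authentication, -d names the target database and -A all installs the full set of membership/role/profile objects. The -d switch expects a database name known to the instance, not a file path, so a loose .mdf generally has to be attached first (or referenced through a full connection string via the -C option) before the tool will touch it.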

    Read the article

  • input file cannot be found

    - by Eric Smith
    I am just messing around with reading input files with Java until I got stumped at the most basic of steps... finding the input file! The input.txt file is in the same directory as my class file that is calling it, yet Eclipse still gives me an error that it can't be found: "Exception in thread "main" java.lang.Error: Unresolved compilation problem: Unhandled exception type FileNotFoundException" My code:

        package pa;

        import java.util.Scanner;

        public class Project {
            public static void main(String[] args) {
                java.io.File file = new java.io.File("input.txt");
                System.out.println(file.getAbsolutePath());
                Scanner input = new Scanner(file);
            }
        }

    input.txt is in the same package, same folder and everything. I'm confused :(
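
    For what it's worth, the quoted message looks like a compile-time problem rather than a missing file: new Scanner(File) declares the checked FileNotFoundException, and main never handles it. A minimal sketch of the same class with the exception declared (illustrative only):

        package pa;

        import java.io.File;
        import java.io.FileNotFoundException;
        import java.util.Scanner;

        public class Project {
            public static void main(String[] args) throws FileNotFoundException {
                File file = new File("input.txt");
                // Relative paths resolve against the working directory, which in Eclipse
                // usually defaults to the project root (see the run configuration), not the package folder
                System.out.println(file.getAbsolutePath());
                Scanner input = new Scanner(file);
            }
        }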

    Read the article

  • Flex video player Seeking with RTMP?

    - by Aswath
    I am working on a Flex video player with RTMP. My question is: how do I skip to the middle of a video without having to download the whole file, using RTMP? I also have some basic questions about a Flex video player with RTMP. Where should I put the video file (FLV): in the Red5 server location, or some other folder? Where should I put the Flex project output files: on the Red5 server, or another server such as XAMPP? How can I skip frames in Flex using RTMP (Red5)? Thanks in advance... Aswath
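
    A minimal sketch of how seeking is usually done over RTMP in ActionScript 3 (illustrative only; the application and stream names are placeholders): with a streaming connection the client simply asks the server for a position, so nothing up to that point has to be downloaded.

        var nc:NetConnection = new NetConnection();
        nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
        nc.connect("rtmp://localhost/myRed5App"); // assumption: a Red5 application named myRed5App

        function onStatus(e:NetStatusEvent):void {
            if (e.info.code == "NetConnection.Connect.Success") {
                var ns:NetStream = new NetStream(nc);
                var video:Video = new Video();
                video.attachNetStream(ns);
                addChild(video);
                ns.play("myVideo");  // FLV typically deployed under the Red5 application's streams folder
                ns.seek(120);        // ask the server to jump straight to the two-minute mark
            }
        }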

    Read the article

  • where are the log files saved in axis2 webservice

    - by KItis
    I have put a log4j.properties file into the WEB-INF/classes folder in my Axis2 web service. Now I can see logs being printed on the console, but I have also configured a file appender and I cannot find the log file anywhere. Could someone help me to find a solution for this problem?

        log4j.rootLogger=DEBUG, CA, FA

        #Console Appender
        log4j.appender.CA=org.apache.log4j.ConsoleAppender
        log4j.appender.CA.layout=org.apache.log4j.PatternLayout
        log4j.appender.CA.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

        #File Appender
        log4j.appender.FA=org.apache.log4j.FileAppender
        log4j.appender.FA.File=ws.log
        log4j.appender.FA.layout=org.apache.log4j.PatternLayout
        log4j.appender.FA.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

        # Set the logger level of File Appender to WARN
        log4j.appender.FA.Threshold = WARN
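
    One likely explanation (offered as a guess, not from the question): a relative path such as ws.log is resolved against the servlet container's working directory, which for Tomcat is usually the directory it was started from, so the file may well exist but not where you are looking. Also note that with Threshold=WARN the file only receives WARN and above, so DEBUG/INFO output never reaches it. Pointing the appender at an explicit location removes the ambiguity, for example:

        # assumes the container defines catalina.home (Tomcat); substitute an absolute path otherwise
        log4j.appender.FA=org.apache.log4j.FileAppender
        log4j.appender.FA.File=${catalina.home}/logs/ws.log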

    Read the article

  • Shell script to process files

    - by Harish
    I need to write a shell script to process a huge folder of nearly 20 levels. I have to process each and every file and check which files contain lines like select, insert or update. When I say line, it should take the line up to the point where I find a semicolon in that file. I should get a result like this:

        C:/test.java select * from dual
        C:/test.java select * from test
        C:/test1.java select * from tester
        C:/test1.java select * from dual

    and so on. Right now I have a script to read all the files:

        #!/bin/ksh
        FILE=<FILEPATH to be traversed>
        TEMPFILE=<Location of Temp file>
        cd $FILE
        for f in `find . ! -type d`; do
            cat $FILE/addedText.txt >> $TEMPFILE/newFile.txt
            cat $f >> $TEMPFILE/newFile.txt
            rm $f
            cat $TEMPFILE/newFile.txt >> $f
            rm $TEMPFILE/newFile.txt
        done

    I have very little knowledge of awk and sed to proceed further in reading each file and achieving what I want to. Can anyone help me with this?
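
    A rough sketch of the kind of awk that is usually used for this (illustrative only; the keyword list, case-sensitivity and the *.java filter are assumptions):

        #!/bin/ksh
        # Print "<filename> <statement>" for each select/insert/update, reading on until the closing semicolon
        find . -type f -name '*.java' | while read -r f; do
            awk -v fname="$f" '
                /select|insert|update/ { collecting = 1; stmt = "" }
                collecting {
                    stmt = stmt " " $0
                    if (index($0, ";") > 0) { print fname stmt; collecting = 0 }
                }
            ' "$f"
        done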

    Read the article

  • Carrierwave upload to a tmp dir before saving to database

    - by user827570
    I'm trying to build a visual editor where users can click an image. They are presented with an image upload form, and once the upload is done I use Ajax to return the image and insert it back into the page. But the above method inserts the image straight into the database, and I want users to be able to visualize the image before it is inserted into the database. So I was wondering if the image could be uploaded with CarrierWave to a temp location, sent back to the user, and then, when the user saves the page, moved into the permanent location. Here's what I have so far:

        def edit_image
          @page = Page.find(1)
          @page.update_attributes(params[:page])
          @page.save
          return :text => @page.file
        end

    But this is what I want to achieve:

        def temp_image
          # uploads received image to a temp location
          # returns image to the user
        end

    And once the user clicks save:

        def save
          # moves the file in the temp folder to the permanent location
        end

    Cheers
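
    A sketch of how CarrierWave's built-in cache is often used for exactly this two-step flow (the action and parameter names below are placeholders of mine): cache! writes the upload into the uploader's temporary cache_dir and yields a cache name the client can hold on to, and retrieve_from_cache! plus store! later promotes that file to the permanent store_dir.

        def temp_image
          uploader = ImageUploader.new                 # assumption: an uploader class named ImageUploader
          uploader.cache!(params[:file])               # lands in the cache_dir (tmp), not the store_dir
          render :json => { :url => uploader.url, :cache_name => uploader.cache_name }
        end

        def save
          @page = Page.find(params[:id])
          @page.file.retrieve_from_cache!(params[:cache_name])  # pick the cached file back up
          @page.file.store!                                      # move it to permanent storage
          @page.save
        end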

    Read the article

  • Auto generate SWF in Flex SDK

    - by Nick
    I'm building my first website with AS3, and I'm using Flash Builder 4 to create/edit my AS classes. I have two .fla files (preloader.fla and portfolio.fla) which I both published as .swc and loaded them into my ActionScript project in FB4 (build path). When I hit debug, FB4 automatically generates a .SWF in bin-debug folder called Preloader.swf, but in my Preloader.as I have new URLRequest("Portfolio.swf"); and this Portfolio.swf isn't being generated by itself. Now the real question; how can I tell FB4 to automatically create both .SWF files for me? Or isn't that possible, any workaround then? Thanks.

    Read the article

  • Uploadify uploadSettings with scripData does not work

    - by kubilayeksioglu
    Hi everyone, I am sending a file to my Java Servlet via jQuery Uploadify. There are no problems while sending the actual file, but when I try to send some scriptData along with the file, to process in the Servlet, it just does not send anything. Here is the JS code:

        $("button").click(function(){
            $("#uploadify").uploadifySettings('scriptData', {'length':'0.2'});
            $('#uploadify').uploadifyUpload();
        });

        $('#uploadify').uploadify({
            'uploader': 'assets/uploadify/uploadify.swf',
            'script': 'upload',
            'folder': '/uploads'
        });

    And here is the Servlet code on the server side:

        out.println(res.getParameter("length"));

    The only output I get is null, while expecting "0.2". I just cannot figure out what's wrong, and any kind of help will be appreciated. Thanks in advance.
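
    One possible explanation (a guess, since the question doesn't show how the Servlet reads the upload): Uploadify sends scriptData as ordinary form fields inside the same multipart/form-data POST as the file, and for multipart requests getParameter typically returns null; the fields have to be pulled out of the parsed multipart body instead. With Apache Commons FileUpload that looks roughly like:

        // sketch using Commons FileUpload; "length" matches the scriptData key from the question
        ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
        List<FileItem> items = upload.parseRequest(request);   // throws FileUploadException
        for (FileItem item : items) {
            if (item.isFormField() && "length".equals(item.getFieldName())) {
                out.println(item.getString());                 // should print "0.2"
            }
        }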

    Read the article

  • Is SSIS able to query flat files from another Windows Server?

    - by atricapilla
    I'm a pretty new SQL Server Integration Services (SSIS) user. Is SSIS able to query data from text files located on another Windows Server? I mean that when SSIS is installed on Windows Server A, is SSIS able to query data from, e.g., one folder containing text files on Windows Server B (under the same domain)? I have only used the SAP BO Data Integrator ETL tool, and it cannot query flat files from another server: during execution, all files must be located on the Job Server machine that executes the job.

    Read the article

  • wordpress permalinks

    - by codedude
    I set my wordpress permalink structure to /%postname%/ but now when I go to a page other than the home page (for example if I went to somelink.com/about) I lose all javascript references. I think this happens because the links to the js files are no longer right, as it is in the imaginary folder "about". This is how the js files are referenced in the header.php file:

        <script type="text/javascript" src="wp-content/themes/default/js/jquery-1.4.2.min.js"></script>
        <script type="text/javascript" src="wp-content/themes/default/js/cufon-yui.js"></script>
        <script type="text/javascript" src="wp-content/themes/default/js/Goudy_Bookletter_1911_400.font.js"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                Cufon.replace('h1');
                Cufon.replace('h3', {textShadow:'0 1px #fff'});
            });
        </script>

    Am I doing something wrong?
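
    The usual fix in WordPress themes (a sketch; these template tags have been part of WordPress core for a long time, but check your version) is to emit absolute URLs built from the theme directory instead of relative paths:

        <script type="text/javascript"
                src="<?php echo get_template_directory_uri(); ?>/js/jquery-1.4.2.min.js"></script>
        <script type="text/javascript"
                src="<?php echo get_template_directory_uri(); ?>/js/cufon-yui.js"></script>
        <script type="text/javascript"
                src="<?php echo get_template_directory_uri(); ?>/js/Goudy_Bookletter_1911_400.font.js"></script>

    That way the browser requests /wp-content/themes/default/js/... regardless of whether the current permalink is / or /about/.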

    Read the article

  • Git subtree workflow

    - by Cedric
    In my current project I'm using an open source forum (https://github.com/vanillaforums/Garden.git). I was planning on doing something like this:

        git remote add vanilla_remote https://github.com/vanillaforums/Garden.git
        git checkout -b vanilla vanilla_remote/master
        git checkout master
        git read-tree --prefix=vanilla -u vanilla

    This way I can make changes inside the vanilla folder (like changing config) and commit them to my master branch, and I can also switch to my vanilla branch to fetch updates. My problem is when I try to merge the branches together:

        git checkout vanilla
        git pull
        git checkout master
        git merge --squash -s subtree --no-commit vanilla

    The problem is that the "update commit" goes on top of my commits and "overwrites" my changes. I would rather have my commits replayed on top of the update. Is there a simple way to do that? I'm not very good with git, so maybe this is the wrong approach. Also, I really don't want to mix my history with the vanilla history.
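
    For comparison, a sketch of the same workflow using the git subtree helper (shipped with newer Git releases, previously in contrib/): it keeps each upstream import squashed into a single commit on master, so your own commits stay ordinary commits on top and the vanilla history never mixes with yours.

        # one-time import of the forum into the vanilla/ folder
        git subtree add --prefix=vanilla https://github.com/vanillaforums/Garden.git master --squash

        # later, to pick up upstream updates
        git subtree pull --prefix=vanilla https://github.com/vanillaforums/Garden.git master --squash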

    Read the article

  • ReSharper C# Live Template for Read-Only Dependency Property and Routed Event Boilerplate

    - by Bart Read
    Following on from my previous post, where I shared a Live Template for quickly declaring a normal read-write dependency property and its associated property change event boilerplate, here's an unsurprisingly similar template for creating a read-only dependency property.

        #region $PROPNAME$ Read-Only Property and Property Change Routed Event

        private static readonly DependencyPropertyKey $PROPNAME$PropertyKey =
            DependencyProperty.RegisterReadOnly(
                "$PROPNAME$", typeof( $PROPTYPE$ ), typeof( $DECLARING_TYPE$ ),
                new PropertyMetadata( $DEF_VALUE$, On$PROPNAME$Changed ) );

        public static readonly DependencyProperty $PROPNAME$Property =
            $PROPNAME$PropertyKey.DependencyProperty;

        public $PROPTYPE$ $PROPNAME$
        {
            get { return ( $PROPTYPE$ ) GetValue( $PROPNAME$Property ); }
            private set { SetValue( $PROPNAME$PropertyKey, value ); }
        }

        public static readonly RoutedEvent $PROPNAME$ChangedEvent =
            EventManager.RegisterRoutedEvent(
                "$PROPNAME$Changed",
                RoutingStrategy.$ROUTINGSTRATEGY$,
                typeof( RoutedPropertyChangedEventHandler< $PROPTYPE$ > ),
                typeof( $DECLARING_TYPE$ ) );

        public event RoutedPropertyChangedEventHandler< $PROPTYPE$ > $PROPNAME$Changed
        {
            add { AddHandler( $PROPNAME$ChangedEvent, value ); }
            remove { RemoveHandler( $PROPNAME$ChangedEvent, value ); }
        }

        private static void On$PROPNAME$Changed(
            DependencyObject d, DependencyPropertyChangedEventArgs e )
        {
            var $DECLARING_TYPE_var$ = d as $DECLARING_TYPE$;

            var args = new RoutedPropertyChangedEventArgs< $PROPTYPE$ >(
                ( $PROPTYPE$ ) e.OldValue,
                ( $PROPTYPE$ ) e.NewValue );
            args.RoutedEvent = $DECLARING_TYPE$.$PROPNAME$ChangedEvent;
            $DECLARING_TYPE_var$.RaiseEvent( args );$END$
        }

        #endregion

    The only real difference here is the addition of the DependencyPropertyKey, which allows your implementation to set the value of the dependency property without exposing the setter code to consumers of your type. You'll probably find that you create read-only dependency properties much less often than read-write properties, but this should still save you some typing when you do need to do so.

    Technorati Tags: resharper, live template, c#, dependency property, read-only, routed events, property change, boilerplate, wpf

    Read the article

  • Provocative Tweets From the Dachis Social Business Summit

    - by Mike Stiles
    On June 20, all who follow social business and how social is changing how we do business and internal business structures gathered in London for the Dachis Social Business Summit. In addition to Oracle SVP Product Development, Reggie Bradford, brands and thought leaders posed some thought-provoking ideas and figures. Here are some of the most oft-tweeted points, and our thoughts that they provoked.

    Tweet: The winners will be those who use data to improve performance.
    Thought: Everyone is dwelling on ROI. Why isn’t everyone dwelling on the opportunity to make their product or service better (as if that doesn’t have an effect on ROI)? Big data can improve you…let it.

    Tweet: High performance hinges on integrated teams that interact with each other.
    Thought: Team members may work well with each other, but does the team as a whole “get” what other teams are doing? That’s the key to an integrated, companywide workforce. (Internal social platforms can facilitate that, by the way.)

    Tweet: Performance improvements come from making the invisible visible.
    Thought: Many of the factors that drive customer behavior and decisions are invisible. Through social, customers are now showing us what we couldn’t see before…if we’re paying attention.

    Tweet: Games have continuous feedback, which is why they’re so engaging. Apply that to business operations.
    Thought: You think your employees have an obligation to be 100% passionate and engaged at all times about making you richer. Think again. Like customers, they must be motivated. Visible insight that they’re advancing on their goals helps.

    Tweet: Who can add value to the data? Data will tend to migrate to where it will be most effective.
    Thought: Not everybody needs all the data. One team will be able to make sense of, use, and add value to data that may be irrelevant to another team. Like a strategized football play, the data has to get sent to the spot on the field where it’s needed most.

    Tweet: The sale isn’t the light at the end of the tunnel, it’s the start of a new marketing cycle.
    Thought: Another reason the ROI question is fundamentally flawed. The sale is not the end of the potential return on investment. After-the-sale service and nurturing begins where the sales “victory” ends.

    Tweet: A dead sale is one that’s not shared. People must be incentivized to share.
    Thought: Guess what, customers now know their value to you as marketers on your behalf. They’ll tell people about your product, but you’ve got to answer, “Why should I?” And you’ve got to answer it with something substantial, not lame trinkets.

    Tweet: Social user motivations are competition, affection, excellence and curiosity.
    Thought: Your followers will engage IF: they can get something for doing it, love your culture so much they want you to win, are consistently stunned at the perfection and coolness of your products, or have been stimulated enough to want to know more.

    Tweet: In Europe, 92% surveyed said they couldn’t care less about brands.
    Thought: Oh well, so much for loving you or being impressed enough with your products & service that they want you to win. We’ve got a long way to go.

    Tweet: A complaint is a gift.
    Thought: Our instinct where complaints are concerned is to a) not listen, b) dismiss the one who complains as a kook, c) make excuses, and d) reassure ourselves with internal group-think that they’re wrong and we’re right. It’s the perfect recipe for how to never, ever grow or get better. In a way, this customer cares more than you do.

    Tweet: 78% of consumers think peer recommendation is the best form of advertising. Eventually, engagement is going to eat advertising.
    Thought: Why is peer recommendation best? Trust. If a friend tells me how great a movie was, I believe him. He has credibility with me. He’s seen it, and he could care less if I buy a ticket. He’s telling me it was awesome because he sincerely believes that it was. That’s gold.

    Tweet: 86% of customers are willing to pay more for a better customer experience.
    Thought: This “how mad can we make our customers without losing them” strategy has to end. The customer experience has actual monetary value, money you’re probably leaving on the table.

    @mikestiles
    Photo: stock.xchng

    Read the article

  • Apache rewriteBase in .htaccess for development subdomains

    - by Das123
    I think I'm missing something and don't think I really understand how RewriteBase works. The problem I have is that I have a production site where the site is in the root directory, yet I have my development site in a localhost subdirectory. E.g. http://www.sitename.com/ vs http://localhost/sitename/ If I have, for example, an images folder, I want to reference the images from the site root by using the initial slash in the href. Using a relative href (without the initial slash) is not an option. This will work on the production site, but the development site is looking for http://localhost/images/imagename.jpg instead of http://localhost/sitename/images/imagename.jpg So I thought all I needed to do was set up the following in my .htaccess file to force the site root to my subdirectory within the development environment:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /sitename

    But this still uses localhost as the site root instead of localhost/sitename. Can anyone please give me some pointers?
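
    One common way around this (a sketch of an alternative setup rather than a RewriteBase fix; the hostname and paths are placeholders): give the development copy its own virtual host, so that / points at the site folder exactly as it does in production.

        # httpd-vhosts.conf (or an included .conf) on the development machine
        <VirtualHost *:80>
            ServerName sitename.localhost
            DocumentRoot "/path/to/htdocs/sitename"
            <Directory "/path/to/htdocs/sitename">
                AllowOverride All
                Require all granted   # Apache 2.4; on 2.2 use "Order allow,deny" / "Allow from all"
            </Directory>
        </VirtualHost>

    With a matching entry for sitename.localhost in the hosts file, http://sitename.localhost/images/imagename.jpg resolves correctly without any rewriting.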

    Read the article

  • 7-Eleven Improves the Digital Guest Experience With 10-Minute Application Provisioning

    - by MichaelM-Oracle
    By Vishal Mehra - Director, Cloud Computing, Oracle Consulting

    Making the Cloud Journey Matter

    There’s much more to cloud computing than cutting costs and closing data centers. In fact, cloud computing is fast becoming the engine for innovation and productivity in the digital age. Oracle Consulting Services contributes to our customers’ cloud journey by accelerating application provisioning and rapidly deploying enterprise solutions. By blending flexibility with standardization, our Middleware as a Service (MWaaS) offering is ensuring the success of many cloud initiatives.

    10-Minute Application Provisioning Times at 7-Eleven

    As a case in point, 7-Eleven recently highlighted the scope, scale, and results of a cloud-powered environment. The world’s largest convenience store chain is rolling out a Digital Guest Experience (DGE) program across 8,500 stores in the U.S. and Canada. Every day, 7-Eleven connects with tens of millions of customers through point-of-sale terminals, web sites, and mobile apps. Promoting customer loyalty, targeting promotions, downloading digital coupons, and accepting digital payments are all part of the roadmap for a comprehensive and rewarding customer experience. And what about the time required for deploying successive versions of this mission-critical solution? Ron Clanton, 7-Eleven's DGE Program Manager, Information Technology, reported at Oracle Open World, "We are now able to provision new environments in less than 10 minutes. This includes the complete SOA Suite on Exalogic, and Enterprise Manager managing both the SOA Suite, Exalogic, and our Exadata databases." OCS understands the complex nature of innovative solutions and has processes and expertise to help clients like 7-Eleven rapidly develop technology that enhances the customer experience with little more than the click of a button. OCS understood that the 7-Eleven roadmap required careful planning, agile development, and a cloud-capable environment to move fast and perform at enterprise scale.

    Business Agility

    Today’s business-savvy technology leaders face competing priorities as they confront the digital disruptions of the mobile revolution and next-generation enterprise applications. To support an innovation agenda, IT is required to balance competing priorities between development and operations groups. Standardization and consolidation of computing resources are the keys to success. With our operational and technical expertise promoting business agility, Oracle Consulting's deep Middleware as a Service experience can make a significant difference to our clients by empowering enterprise IT organizations with the computing environment they seek to keep up with the pace of change that digitally driven business units expect. Depending on the needs of the organization, this environment runs within a private, public, or hybrid cloud infrastructure. Through on-demand access to a shared pool of configurable computing resources, IT delivers the standard tools and methods for developing, integrating, deploying, and scaling next-generation applications. Gold profiles of predefined configurations eliminate the version mismatches among databases, application servers, and SOA suite components, delivered both by Oracle and other enterprise ISVs. These computing resources are well defined in business terms, enabling users to select what they need from a service catalog.

    Striking the Balance between Development and Operations

    As a result, development groups have the flexibility to choose among a menu of available services with descriptions of standard business functions, service level guarantees, and costs. Faced with the consumerization of enterprise IT, they can deliver the innovative customer experiences that seamlessly integrate with underlying enterprise applications and services. This cloud-powered development and testing environment accelerates release cycles to ensure agile development and rapid deployments. At the same time, the operations group is relying on certified stacks and frameworks, tuned to predefined environments and patterns. Operators can maintain a high level of security, and continue best practices for applications/systems monitoring and management. Moreover, faced with the challenges of delivering on service level agreements (SLAs) with the business units, operators can ensure performance, scalability, and reliability of the infrastructure. The elasticity of a cloud-computing environment (the ability to rapidly add virtual machines and storage in response to computing demands) makes a difference for hardware utilization and efficiency.

    Contending with Continuous Change

    What does it take to succeed on the promise of the cloud? As the engine for innovation and productivity in the digital age, IT must face not only the technical transformations but also the organizational challenges of the cloud. Standardizing key technologies, resources, and services through cloud computing is only one part of the cloud journey. Managing relationships among multiple departments and projects over time (developing the management, governance, and monitoring capabilities within IT) is an often unmentioned but all too important second part. In fact, IT must have the organizational agility to contend with continuous change. This is where a skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. With a lifecycle services approach to delivering innovative business solutions, Oracle Consulting Services has expertise and a portfolio of services to help enterprise customers succeed on their cloud journeys as well as other converging mega trends.

    Read the article

  • commons-exec: Executing a program on the system PATH?

    - by Stefan Kendall
    I'm trying to execute a program (convert from ImageMagick, to be specific) whose parent folder exists on the path. Ergo, when I run convert from the command line, it runs the command. The following, however, fails:

        String command = "convert";
        CommandLine commandLine = CommandLine.parse(command);
        commandLine.addArgument(...)
        ...
        int exitValue = executor.execute(commandLine);

    If I specify the full path of the convert executable (C:\Program files\...) then this code works. If I don't do this, I get an exception thrown with exit value 4. How do I get commons-exec to recognize the system path?
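
    A workaround sketch (my own illustration, not from the question): resolve the executable against the PATH environment variable up front and hand commons-exec the absolute path, which is the case already known to work. Imports from java.io and org.apache.commons.exec are assumed.

        // Find the first directory on PATH that contains the named executable (Windows-style .exe assumed)
        static File findOnPath(String exeName) {
            for (String dir : System.getenv("PATH").split(File.pathSeparator)) {
                File candidate = new File(dir, exeName + ".exe");
                if (candidate.isFile()) {
                    return candidate;
                }
            }
            return null;
        }

        File convert = findOnPath("convert");
        CommandLine commandLine = new CommandLine(convert);  // CommandLine also accepts a File
        commandLine.addArgument("input.png");                // placeholder arguments
        commandLine.addArgument("output.jpg");
        int exitValue = new DefaultExecutor().execute(commandLine);

    On Windows this also makes it obvious when the system's own convert.exe (the FAT-to-NTFS utility) is being picked up ahead of ImageMagick's, which is a common cause of unexpected exit codes.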

    Read the article

  • Auto-generate WebControls

    - by Adrian K
    I want to generate .NET code from a template so that development is more rapid and lazy developers (and I mean that in the nicest possible way!) don't have to write the controls in the IDE, compile them, etc... I know I can roll my own tool which generates the code using reflection (by reading in some text file, etc), but I just wondered if there was an easier way than starting from scratch, since this is what ASP.NET basically does already; so is there any way to leverage this? E.g. to quote Peter A. Bromberg: "Even an ASPX page with no code on it gets turned into an instance of the System.Web.UI.Page class. The page is parsed by the ASP.NET engine when it is first requested, and then its JIT compiled version is cached in the Temporary ASP.NET Files folder as long as the application is running and the .aspx page hasn't been changed." Ideally I want to auto-generate WebControls, but examples of anything closely related will do. C# examples preferred also, but anything considered :)

    Read the article

  • A Knights Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No.3.15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had, for many years, used some software which broke up incoming “parent” orders into smaller “child” orders that were then transmitted to various exchanges or trading venues for execution. A tracking ‘cumulative quantity’ function counted the number of ‘child’ orders and stopped the process once the total of child orders matched the ‘parent’ and so the parent order had been completed. Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called ‘power peg’ and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point, had that flag ever been set, the old ‘Power Peg’ would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn’t, presumably because the software that set the flag was removed. In 2012, nearly a decade after ‘Power Peg’ was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time, the flag had remained unused and someone made the fateful decision to reuse it, and replace the old ‘power peg’ code with this new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn’t. To quote… “Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.” (para 15) “On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers.
Although one part of Knight’s order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS.” (para 16) SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight’s shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading. Obviously, I’m interested in all this because, at one time, I used to write trading systems for the City of London. Obviously, the US SEC is in a far better position than any of us to work out the failings of Knight’s IT department, and the report makes for painful reading. I can’t help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was ‘all-or-nothing’, and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek Tragedy. All the way along one wants to shout ‘No! not that way!’ and ‘Aargh! Don’t do it!’. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight’s mistakes weren’t made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they’re obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • When building a web application project, TFS 2008 builds two separate projects in _PublishedFolder.

    - by Steve Johnson
    I am trying to perform build automation on one of my web application projects built using VS 2008. The _PublishedWebSites contains two folders: Web and Deploy. I want TFS 2008 to generate only the deploy folder and not the web folder. Here is my TFSBuild.proj file: <Project ToolsVersion="3.5" DefaultTargets="Compile" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" /> <Import Project="$(MSBuildExtensionsPath)\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.targets" /> <ItemGroup> <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Development/Main/MySoftware.sln"> <Targets></Targets> <Properties></Properties> </SolutionToBuild> </ItemGroup> <ItemGroup> <ConfigurationToBuild Include="Release|AnyCPU"> <FlavorToBuild>Release</FlavorToBuild> <PlatformToBuild>Any CPU</PlatformToBuild> </ConfigurationToBuild> </ItemGroup> <!--<ItemGroup> <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Development/Main/MySoftware.sln"> <Targets></Targets> <Properties></Properties> </SolutionToBuild> </ItemGroup> <ItemGroup> <ConfigurationToBuild Include="Release|x64"> <FlavorToBuild>Release</FlavorToBuild> <PlatformToBuild>x64</PlatformToBuild> </ConfigurationToBuild> </ItemGroup>--> <ItemGroup> <AdditionalReferencePath Include="C:\3PR" /> </ItemGroup> <Target Name="GetCopyToOutputDirectoryItems" Outputs="@(AllItemsFullPathWithTargetPath)" DependsOnTargets="AssignTargetPaths;_SplitProjectReferencesByFileExistence"> <!-- Get items from child projects first. --> <MSBuild Projects="@(_MSBuildProjectReferenceExistent)" Targets="GetCopyToOutputDirectoryItems" Properties="%(_MSBuildProjectReferenceExistent.SetConfiguration); %(_MSBuildProjectReferenceExistent.SetPlatform)" Condition="'@(_MSBuildProjectReferenceExistent)'!=''"> <Output TaskParameter="TargetOutputs" ItemName="_AllChildProjectItemsWithTargetPathNotFiltered"/> </MSBuild> <!-- Remove duplicates. --> <RemoveDuplicates Inputs="@(_AllChildProjectItemsWithTargetPathNotFiltered)"> <Output TaskParameter="Filtered" ItemName="_AllChildProjectItemsWithTargetPath"/> </RemoveDuplicates> <!-- Target outputs must be full paths because they will be consumed by a different project. --> <CreateItem Include="@(_AllChildProjectItemsWithTargetPath->'%(FullPath)')" Exclude= "$(BuildProjectFolderPath)/../../Development/Main/Web/Bin*.pdb; *.refresh; *.vshost.exe; *.manifest; *.compiled; $(BuildProjectFolderPath)/../../Development/Main/Web/Auth/MySoftware.dll; $(BuildProjectFolderPath)/../../Development/Main/Web/BinApp_Web_*.dll;" Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='Always' or '%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'" > <Output TaskParameter="Include" ItemName="AllItemsFullPathWithTargetPath"/> <Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectoryAlways" Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='Always'"/> <Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectory" Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"/> </CreateItem> </Target> <!-- To modify your build process, add your task inside one of the targets below and uncomment it. Other similar extension points exist, see Microsoft.WebDeployment.targets. 
<Target Name="BeforeBuild"> </Target> <Target Name="BeforeMerge"> </Target> <Target Name="AfterMerge"> </Target> <Target Name="AfterBuild"> </Target> --> </Project> I want to build everything that the builtin Deploy project is doing for me. But I don't want the generated web project as it contains App_Web_xxxx.dll assemblies instead of a single compiled assembly. How can I do this?

    Read the article

  • How to add ServiceReference to an embedded file?

    - by TruMan1
    I have a class library. In one of the classes, I am adding a script reference on the page like this:

        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            if (this.Page != null)
            {
                ScriptManager sm = ScriptManager.GetCurrent(this.Page);
                ServiceReference reference = new ServiceReference("~/Sitefinity/Admin/Services/ContactsService.asmx");
                reference.InlineScript = true;
                sm.Services.Add(reference);
            }
        }

    For the ServiceReference file path, is there a way to add an embedded file instead? I want to keep everything self-contained in my class library instead of dropping a file into the website folder.

    Read the article

  • How to save a value in a plist file in iphone?

    - by Warrior
    I am new to iphone development.I am using the below code to add the value to the plist but when check the plist after executing the code i dont see any value saved in the plist.Where do i go wrong? please help me out.The plist i have created is in the resource folder.Thanks. NSString *path = [[NSBundle mainBundle] bundlePath]; NSString *finalPath = [path stringByAppendingPathComponent:@"regvalue.plist"]; NSMutableDictionary* plistDict = [[NSMutableDictionary alloc] initWithContentsOfFile:finalPath]; [plistDict setValue:@"hello" forKey:@"choice"]; [plistDict writeToFile:finalPath atomically: YES]; For retrieving the value NSString *path = [[NSBundle mainBundle] bundlePath]; NSString *finalPath = [path stringByAppendingPathComponent:@"regvalue.plist"]; NSMutableDictionary* plistDict = [[NSMutableDictionary alloc] initWithContentsOfFile:finalPath]; NSString *value; value = [plistDict objectForKey:@"choice"]; NSLog(@"the value is %@",value); It gives only null value.Please help me out.Thanks.

    Read the article

  • Problem with connection to MS SQL Server database using SSMS

    - by Charles
    I have a database online with GoDaddy (who uses SQL Server 2005). They provide basic management tools, but tell you that for more advanced tools you can connect directly using SSMS. I followed their instructions to ensure my online database will accept remote connections, and can apparently log in using SSMS with success (after giving my hostname and access data). However, when attempting to expand the "Databases" folder tree, I get the following error:

        Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&LinkId=20476

        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        The server principal "cmitchell" is not able to access the database "3pointdb" under the current security context. (Microsoft SQL Server, Error: 916)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=09.00.4262&EvtSrc=MSSQLServer&EvtID=916&LinkId=20476

    Read the article
