Daily Archives

Articles indexed Sunday June 30 2013

Page 10/13 | < Previous Page | 6 7 8 9 10 11 12 13  | Next Page >

  • Memory leak till crash due to HttpRequest

    - by Alex R.
    I played with HttpRequest and realized that the memory is not cleaned up after each request. After some time the running tab within Chrome will crash. Here is some testing code; put a large file into the 'www' directory and set the URL in the code accordingly.

      import 'dart:async';
      import 'dart:html';

      void main() {
        const PATH = "http://127.0.0.1:3030/PATH_TO_FILE";
        new Timer.periodic(new Duration(seconds: 10), (Timer it) => getString(PATH));
      }

      void getString(String url) {
        HttpRequest.getString(url).then((String data) { });
      }

    Is this really a bug or did I do something wrong?

    Read the article

  • Rotating a movieclip slowly (so it looks like it was animated) [on hold]

    - by user2537021
    I'm trying to rotate an object by X degrees, but I don't want the rotation to happen all of a sudden; I want to actually see the object spinning all the way to the X degree (I'm trying to be more clear). (I'm pretty new at this, so please be patient with me.)

      switch (enemywalk) {
          case 1:
              colx = enemy.y;
              enemy.y -= 115;
              if (enemy.hitTestObject(pared1) || enemy.hitTestObject(pared2)) {
                  enemy.y = colx;
              }
              enemy.rotation = enemyrotate; // random generated number
              break;
      }
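    One common approach is to animate the turn over several frames instead of assigning the final angle at once. A minimal ActionScript sketch, assuming enemyrotate holds the target angle (the ENTER_FRAME handler and the 5-degree step are illustrative, not from the original post):

      enemy.addEventListener(Event.ENTER_FRAME, spinToTarget);

      function spinToTarget(e:Event):void {
          var diff:Number = enemyrotate - enemy.rotation;
          if (Math.abs(diff) <= 5) {
              // close enough: snap to the exact angle and stop animating
              enemy.rotation = enemyrotate;
              enemy.removeEventListener(Event.ENTER_FRAME, spinToTarget);
          } else {
              // step a few degrees per frame toward the target
              enemy.rotation += (diff > 0 ? 5 : -5);
          }
      }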

    Read the article

  • Working with a string as an array of characters

    - by Malfunction
    I'm having some trouble with a string represented as an array of characters. What I'd like to do, as I would in Java, is the following:

      while (i < chars.length) {
          char ch = chars[i];
          if ((WORD_CHARS.indexOf(ch) >= 0) == punctuation) {
              String token = buffer.toString();
              if (token.length() > 0) {
                  parts.add(token);
              }
              buffer = new StringBuffer();
          }
          buffer.append(ch);
          i++;
      }

    What I'm doing is something like this:

      while (i < strlen(chars)) {
          char ch = chars[i];
          if (([WORD_CHARS rangeOfString:ch] >= 0) == punctuation) {
              NSString *token = buffer.toString();
              if ([token length] > 0) {
                  [parts addObject:token];
              }
              buffer = [NSMutableString string];
          }
          [buffer append(ch)];
          i++;
      }

    I'm not sure how I'm supposed to convert String token = buffer.toString(); to Objective-C, where buffer is an NSMutableString. Also, how do I check this if condition in Objective-C: if ((WORD_CHARS.indexOf(ch) >= 0) == punctuation)? WORD_CHARS is an NSString. I'm also having trouble with appending ch to buffer. Any help is greatly appreciated.
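    A rough Objective-C translation of the Java loop, offered only as a sketch: it assumes the source string is an NSString called input rather than a C char array, that punctuation is a BOOL, and that WORD_CHARS, parts and buffer are the objects named in the question.

      NSMutableString *buffer = [NSMutableString string];
      NSMutableArray *parts = [NSMutableArray array];

      for (NSUInteger i = 0; i < [input length]; i++) {
          unichar ch = [input characterAtIndex:i];
          NSString *chStr = [NSString stringWithFormat:@"%C", ch];
          // Java's WORD_CHARS.indexOf(ch) >= 0 becomes a rangeOfString: check
          BOOL isWordChar = ([WORD_CHARS rangeOfString:chStr].location != NSNotFound);
          if (isWordChar == punctuation) {
              NSString *token = [buffer copy];      // equivalent of buffer.toString()
              if ([token length] > 0) {
                  [parts addObject:token];
              }
              [buffer setString:@""];               // equivalent of buffer = new StringBuffer()
          }
          [buffer appendFormat:@"%C", ch];          // equivalent of buffer.append(ch)
      }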

    Read the article

  • JQuery Tabs; Google Map is shown with just one tile

    - by java_dude
    Yes, this problem is common and I have seen numerous solutions on this site, but none of them were helpful. I know that resizing will fix the problem, but I'm not a jQuery expert and I don't know how to resize the map after the page has loaded and Headline 3 is active. I have created a jsFiddle example: http://jsfiddle.net/KhwZS/1300/ Any help is appreciated, thanks. Ignore the top comment and the links provided by @Dr.Molle, as these links do not provide the solution I was looking for; check out the solution I have accepted.
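    One commonly suggested fix, sketched here on the assumption that map holds the google.maps.Map instance and that jQuery UI tabs 1.9 or later is in use, is to trigger the Maps API resize event when the tab containing the map becomes active:

      $("#tabs").tabs({
          activate: function (event, ui) {
              // re-layout the map once its tab panel is actually visible
              google.maps.event.trigger(map, "resize");
              // optional: re-center afterwards (initialCenter is a placeholder LatLng)
              map.setCenter(initialCenter);
          }
      });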

    Read the article

  • c# counting identical strings from text file

    - by Winkz
    I have a foreach statement where I go through several lines from a text file, where I have trimmed down and sorted out the lines I need. What I want to do is count up how many times an identical string is present. How do I do this? Here is my code; it's the second if statement where I am stuck:

      foreach (string line in lines.Where(l => l.Length >= 5))
      {
          string a = line.Remove(0, 11);
          if (a.Contains(mobName) && a.Contains("dies"))
          {
              mobDeathCount++;
          }
          if (a.Contains(mobName) && a.Contains("drops"))
          {
              string lastpart = a.Substring(a.LastIndexOf("drops"));
              string modifiedLastpart = lastpart.Remove(0, 6);
          }
      }

    Here's what some of the lines look like:

      a bag of coins
      a siog brandy
      a bag of coins
      a bag of coins
      the Cath Shield
      a tattered scroll

    So what I'm trying to do is count up that there are 3 lines with "a bag of coins". But I need it to work for anything: the drop list is huge, so I can't add all of the items by hand; it would take too long.
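    A minimal sketch of one way to tally the drop names without hard-coding them, assuming modifiedLastpart holds the drop name extracted above and that System.Collections.Generic and System.Linq are already in scope (the dictionary name is illustrative):

      // declared once, before the foreach loop
      var dropCounts = new Dictionary<string, int>();

      // inside the second if block, after modifiedLastpart is computed
      if (dropCounts.ContainsKey(modifiedLastpart))
          dropCounts[modifiedLastpart]++;
      else
          dropCounts[modifiedLastpart] = 1;

      // after the loop: print each distinct drop and how often it appeared
      foreach (var pair in dropCounts.OrderByDescending(p => p.Value))
          Console.WriteLine("{0} x {1}", pair.Value, pair.Key);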

    Read the article

  • Effortlessly resize images in Orchard 1.7

    - by Bertrand Le Roy
    I’ve written several times about image resizing in .NET, but never in the context of Orchard. With the imminent release of Orchard 1.7, it’s time to correct this. The new version comes with an extensible media pipeline that enables you to define complex image processing workflows that can automatically resize, change formats or apply watermarks. This is not the subject of this post, however. What I want to show here is one of the underlying APIs that enable that feature, and it comes in the form of a new shape. Once you have enabled the media processing feature, a new ResizeMediaUrl shape becomes available from your views. All you have to do is feed it a virtual path and size (and, if you need to override defaults, a few other optional parameters), and it will do all the work for you of creating a unique URL for the resized image, and write that image to disk the first time the shape is rendered:

      <img src="@Display.ResizeMediaUrl(Path: img, Width: 59)"/>

    Notice how I only specified a maximum width. The height could of course be specified, but in this case will be automatically determined so that the aspect ratio is preserved. The second time the shape is rendered, the shape will notice that the resized file already exists on disk, and it will serve that directly, so caching is handled automatically and the image can be served almost as fast as the original static one, because it is also a static image. Only the URL generation and checking for the file existence takes time. Looking at the generated thumbnails on disk: in the case of those product images, the product page will download 12kB worth of images instead of 1.87MB. The full size images will only be downloaded as needed, if the user clicks on one of the thumbnails to get the full-scale image. This is an extremely useful tool to use in your themes to easily render images of the exact right size and thus limit your bandwidth consumption. Mobile users will thank you for that.
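    As a quick illustration of how this might look in a theme template (a sketch only: Model.ProductImages and the 200-pixel width are made-up placeholders, and only the Path and Width parameters shown above are used):

      <ul class="product-thumbnails">
          @foreach (var imagePath in Model.ProductImages) {
              <li>
                  @* a 200px-wide thumbnail; the height follows the aspect ratio *@
                  <img src="@Display.ResizeMediaUrl(Path: imagePath, Width: 200)" alt="" />
              </li>
          }
      </ul>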

    Read the article

  • BizTalk host throttling &ndash; Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx
    I have worked for some days around the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e. it keeps looping and never terminates or completes). As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little bit and see what happens deep inside BizTalk, and thus understand the reasons for this behavior. NOTE: in this article, I will focus on the High database size throttling condition. I will try and work on the other conditions in some not too distant future…
    Test conditions. The singleton orchestration: for the purpose of this study, I have created the following orchestration, which is a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message. Throttling settings: I have two distinct hosts:
      one that hosts the receive port (basic FILE port): Ports_ReceiveHost
      one that hosts the orchestration: ProcessingHost
    In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value): [Throttling thresholds] Message count in database: 500 (default value: 50000).
    Evolution of performance counters when submitting messages. Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object):
      Database size
      High database size
      Message delivery throttling state
      Message publishing throttling state
      Message delivery delay (ms)
      Message publishing delay (ms)
      Message delivery throttling state duration
      Message publishing throttling state duration
    (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.)
    Database size. It is quite obvious that we will start by watching the database size and high database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm. NOTE: during this test I submitted 600 messages, one message at a time every 10ms, to see the evolution of the counters we have previously selected. It might not show very well on the screenshot, but here is what happened:
      From 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host (blue line) kept growing until it reached a maximum of 504.
      At 15:47:50, the high database size alert fires.
    At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense. This counter measures the size of the database queue that is being filled by the host, not consumed.
    Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the public MPWiki page. Now, looking at the Message publishing throttling state for the receiving host (green line), we can see that a throttling condition has been reached at 15:47:50. We can also see that the Message publishing delay (ms) (blue line) has begun growing slowly from this point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing.
    Digging further. So, what happens to the database queue then? Is it flushed some day, or does it keep growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and I submitted 30 messages. The test was started at 18:26. At 18:56 (i.e. exactly 30 min later) the throttling was stopped and the database size was divided by 2. 30 min later again, the database size had dropped to almost zero. I guess I'll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one.
    Digging even further. If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :)
    A last word. Let's end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding. So whatever you do with those settings, do a lot of testing and benchmarking!
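    For readers who prefer scripting the collection instead of clicking through Perfmon, here is a rough PowerShell sketch using Get-Counter; the host instance name and the exact counter paths are assumptions and may need adjusting to match the counters listed above:

      # sample a few BizTalk:Message Agent counters for one host every 10 seconds for an hour
      $counters = @(
          "\BizTalk:Message Agent(Ports_ReceiveHost)\Database Size",
          "\BizTalk:Message Agent(Ports_ReceiveHost)\High Database Size",
          "\BizTalk:Message Agent(Ports_ReceiveHost)\Message Publishing Throttling State",
          "\BizTalk:Message Agent(Ports_ReceiveHost)\Message Publishing Delay (ms)"
      )
      Get-Counter -Counter $counters -SampleInterval 10 -MaxSamples 360 |
          Export-Counter -Path C:\temp\throttling.blg -FileFormat BLG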

    Read the article

  • Cloud Based Load Testing Using TF Service &amp; VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx
    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through: what is Cloud Based Load Testing? How have I been using this feature (a success story)? Where can you find more resources on this feature?
    What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barriers to Load Testing & Performance Testing adoption are:
      the high infrastructure and administration cost that comes with this phase of testing
      the time taken to procure and set up the test infrastructure
      finding a use for this infrastructure investment after completion of testing
    Is cloud the answer?
      100% Visual Studio compatible
      Scalable and realistic
      Start testing in < 2 minutes
      Intuitive
      Pay only for what you need
      Use existing on-premise tests on the cloud
    There are a lot of vendors out there offering Cloud Based Load Testing, to name a few: LoadStorm, Soasta, BlazeMeter, Blitz and others… The question you may want to ask is, why should you go with Microsoft's Cloud based Load Test offering? If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing Web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API-based testing: for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service; these tests can be run on the cloud test rig and loaded with 'N' concurrent users for performance testing. If you have your assets already hosted in Azure, and possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost because of the generated traffic, since the traffic is coming from the same data centre. The licensing or pricing information on Microsoft's cloud based Load Test service is yet to be announced, but I would expect this to be priced attractively to match the market competition. The only additional configuration required for running load tests on Microsoft's Cloud based Load Test service is to select the Test run location as "Run tests using Visual Studio Team Foundation Service".
    How have I been using Microsoft's Cloud based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out, which gave me the opportunity to test real-world applications at various clients using the Microsoft Load Test Service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service has been an insurance quote generation engine.
    This insurance quote generation engine is:
      hosted in Windows Azure
      expected to get quote requests from across the globe
      expected to handle 5 million quote requests in a day (not clear how this load will be distributed across the day)
    There was no way I could simulate that kind of load from on-premise without standing up additional hardware, but Microsoft's Cloud based Load Test service allowed me to test my key performance testing scenarios, i.e. simulating expected load, endurance testing, threshold testing and testing for latency.
    Simulating expected load: approach to devising a load pattern. My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, invoking 1 quote from the quote engine takes 0.5 seconds. Now if you do the math:
      1 quote request by 1 user = 0.5 seconds
      quotes generated by 1 user in 24 hours = 2 * 60 * 60 * 24 = 172,800
      quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000
    This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, etc., then you can devise your own load pattern. Some examples of load test patterns can be found here.
    Endurance Testing. To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.
    Threshold Testing. To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine, incrementing user load with different variations of incremental user load per minute until the application crashed out and forced an IIS reset.
    Testing for Latency. Currently the Microsoft Load Test service does not support generating geographically distributed load; I, however, deployed the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.
    More resources on Microsoft Cloud based Load Test Service. A few important links to get you started:
      Download Visual Studio Ultimate 2013 Preview
      Getting started guide for load testing using Team Foundation Service
      Troubleshooting guide for FAQs and known issues
      Team Foundation Service forum for questions and support
      Detailed demo and presentation (link to Tech-Ed session recording)
      Detailed demo and presentation (link to Build session recording)
    There are a few limits on the usage of the Microsoft Cloud based Load Test service that you can read about here. If you have any feedback on the Microsoft Cloud based Load Test service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
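    As a purely illustrative sketch of the API-based testing described above (every name here, including QuoteServiceClient and GetQuote, is hypothetical and not from the post), such a unit test might look like this in MSTest:

      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class QuoteEngineLoadTests
      {
          [TestMethod]
          public void RequestSingleQuote()
          {
              // hypothetical WCF client generated from a service reference
              using (var client = new QuoteServiceClient())
              {
                  var quote = client.GetQuote("motor", 30, "UK");
                  Assert.IsNotNull(quote);
              }
          }
      }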

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx
    I have had an idea percolating in the back of my mind for over a year now that I've just recently started to implement. This idea relates to building out "internal tools" to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don't want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them.
    Effort Required. Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don't need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don't even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we've written to accomplish a given task. All we really need is a simple console application!
    Plumbing Problems. A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John's idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said than done. Doing a "File –> New Project" to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of "plumbing" built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case because this is an ASP .NET application) is large and would need to be reproduced into a similar configuration file for a Console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you'd have to execute them on a machine with network access to all of the needed resources.
    Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that's a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible.
    Mix and Match. So we need a way to build tools that are easily accessible via the web application but also don't require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it's part of the web application we get all of the plumbing that comes along with that code, and we're executing everything on the web servers, which means we'll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I'm not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I'm only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the "back end" to a highly polished and eye-pleasing public face.
    Implementation. As I mentioned at the beginning of this post, this is an idea that I've had for a while but have only recently started building out. I've outlined some general guidelines and design goals for this effort as follows.
    Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be "wrapped" within the context of HTTP requests and responses, but I don't want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user.
    Task-oriented code only: After building the initial "harness" for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible.
    Built-in documentation: One of the great things about most command line utilities is the 'help' switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves.
    I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line.
    Here's a quick example of the code that would be needed to create a new tool to do something within your system:

      public class CustomerTools
      {
          [Verb]
          public void UpdateName(int customerId, string firstName, string lastName)
          {
              // invoke internal services/domain objects/whatever to perform the update
          }
      }

    This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the 'Verb' attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:

      Parser.Run(args, new CustomerTools());

    Note that 'args' is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods, so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:

      SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber

    'SomeExe' in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I've found that invoking the 'Parser' class can be done from within the context of a web application just as easily as it can from within the 'Main' method entry point of a Console application. There are, however, a few sticking points that I'm working around.
    Splitting arguments into the 'args' array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the 'args' parameter referenced above). Generally speaking they get split by whitespace, but it's also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We'll need to re-create this logic within our web application so that we can give the 'args' value to CLAP just like a console application would.
    Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can't use Console.WriteLine within a web application, so I'll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort.
    Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.
    Mimicking the console experience: This isn't really a requirement so much as a "nice to have". To start out, the command-line interface in the web application will probably be a single 'textarea' control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a "shell" interface look and feel.
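    Since re-creating the command-line splitting logic is called out as the first sticking point, here is a rough C# sketch of one way to do it (a hand-rolled tokenizer, not anything from CLAP itself; it assumes the usual System.Collections.Generic and System.Text usings and handles quotes very simply):

      // Splits a command like: UpdateName -firstName:Jesse -lastName:"Taber Jr"
      // roughly the way a shell would: on whitespace, except inside double quotes.
      public static string[] SplitCommandLine(string commandText)
      {
          var args = new List<string>();
          var current = new StringBuilder();
          bool inQuotes = false;

          foreach (char c in commandText)
          {
              if (c == '"')
              {
                  inQuotes = !inQuotes;          // toggle quote mode, drop the quote itself
              }
              else if (char.IsWhiteSpace(c) && !inQuotes)
              {
                  if (current.Length > 0)
                  {
                      args.Add(current.ToString());
                      current.Clear();
                  }
              }
              else
              {
                  current.Append(c);
              }
          }
          if (current.Length > 0)
          {
              args.Add(current.ToString());
          }
          return args.ToArray();
      }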
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • SignalR Book

    - by eddraper
    Originally posted on: http://geekswithblogs.net/eddraper/archive/2013/06/30/signalr-book.aspx  Just finished SignalR: Real-time Application Development from Packt Publishing.  It’s a really great read for those wanting a holistic view of SignalR as a true application building block.  In many respects I think that SignalR could end up being a game changer for entirely new classes of both apps and frameworks. I’d love to see the puppy plugged into a rich client side ORM…

    Read the article

  • Thoughts on Build 2013

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/30/153294.aspx
    And so another Build conference has come to an end. Below are my thoughts/perspectives on various aspects of the event. I'll do a separate blog post on my thoughts on the Build message for developers.
    The Good. Moscone Center was a great venue for Build! Easy to get around, easy to get to, and well maintained, it was a very comfortable conference venue. Yeah, the free swag was nice. Build has built up an expectation that attendees will always get something; it'll be interesting to see how Microsoft maintains this expectation over the next few Build events. I still maintain that free swag should never be the main reason one attends an event, and for me this was definitely just an added bonus. I'm planning on trying to use the Surface as a dedicated 2nd device at work for meetings; I'll share my experiences over the next few months. The hackathon event was a great idea, although personally I couldn't justify spending the money on a conference registration just to spend the entire conference coding. Still, the apps that were created were really great and there was a lot of passion and excitement around the hackathon. I wonder if they couldn't have had the hackathon on the Monday/Tuesday for those that wanted to participate, so they didn't miss any of the actual conference over Wed/Thurs. San Francisco was a great city to host Build. Getting from hotels to the conference center was very easy (especially for me, I was only 3 blocks away) and the city itself felt very safe. However, if I never have to fly into SFO again I'll be alright with that! There were delays going into and out of SFO, and both apparently were due to the airport itself.
    The Bad. Build is one of those oddities on the conference landscape where people will pay to commit to attending an event without knowing anything about the sessions. We got our list of conference sessions when we registered on Tuesday, not before. And even then, we only got titles and not descriptions (those were eventually made available via the conference's mobile application). I get it… they're going to make announcements and they don't want to give anything away through the session titles. But honestly, there wasn't anything in the session titles that I would have considered a surprise. Breakfasts were brutal: high-carb pastries, donuts, and muffins with fruit and hard boiled eggs do not a conference breakfast make. I can't believe that the difference between a continental breakfast per person and a hot breakfast buffet would have been a huge impact on a conference fee that was already around $2000. The vendor area was anemic. I don't know why Microsoft forces the vendors into cookie-cutter booth areas (this year they were all made of plywood material). WPC, TechEd – booth areas there allow the vendors to be creative with their displays. Not so much for Build. Really odd was the lack of Microsoft's own representation around Bing. In the day 1 keynote Microsoft made a big deal about Bing as an API, yet there was nobody in the vendor area set up to provide more information or have discussions about the Bing API.
    The Ugly. Our name badges were NFC-enabled. The purpose of this, beyond the vendors being able to scan your info, wasn't really made clear. An attendee I talked to showed how you could get a reader app on your phone so you can scan other members' cards and collect their contact info – which is a kewl idea; business cards are so 1990's.
    But I was *shocked* at the amount of information that was on our name badges! Here's what's displayed on our name badge:
      - Name
      - Company
      - Twitter Handle
    I'm ok with that. But here's what actually gets read:
      - Name
      - Company
      - Address Used for Registration
      - Phone Number Used for Registration
    So sharing that info with another attendee, they get way more of my info than just how to find me on Twitter! Microsoft, you need to fix this for the future. If vendors want to collect information on attendees, they should be able to collect an ID from the badge, then get a report with corresponding records afterwards. My personal information should not be so readily available, and without my knowledge!
    Final Verdict. Maybe it's my older age, maybe it's where I'm at in life with family, maybe it's where I'm at in my career, but when I consider whether a conference experience was valuable I get to the core reasons I attend: opportunities to learn, opportunities to network, opportunities to engage with Microsoft.
    Opportunities to Learn: Sessions I attended were generally OK, with some really stand-out ones on Day 2. I would love to see Microsoft adopt the Dojo format for a portion of their sessions. Hands On Labs are dull and lecture-style sessions are great for information sharing, but a guided hands-on coding session (read: Dojo) provides the best of both worlds. Given that all content is publicly available online to everyone (Build attendee or not), the value of attending the conference sessions is decreased. The value, though, is in the discussions that take place in person afterwards, which leads to…
    Opportunities to Network: I enjoyed getting together with old friends and connecting with Twitter friends in person for the first time. I also had an opportunity to meet total strangers. So from a networking perspective, Build was fantastic! I still think it would have been great to have an area for ad-hoc discussions – where speakers could announce they'd be available for more questions after their sessions, or attendees who wanted to discuss a topic in more depth with other attendees could arrange space. Some people have no problem being outgoing and making these things happen, but others do, and a structured model is more attractive for them.
    Opportunities to Engage with Microsoft: Hit and miss on this one. Outside of the vendor area, unless you cornered or reached out to a speaker, there wasn't any defined way to connect with blue badges. And as I mentioned above, Microsoft didn't have full representation in the vendor area (no Bing).
    All in all, Build was a fun party where I was informed about some new stuff and got some free swag. Was it worth the time away from home and the hit to my PD budget? I'd say somewhat. Build is a great informational conference, but I wouldn't call it a learning conference. Considering that TechEd seems to be moving to more of an IT Pro focus, independent developer conferences seem to be the best value for those looking to learn and not just be informed. With the rapid development cycle Microsoft is embracing, we're already seeing Build happening twice within a 12-month period. If that continues, the value of attending Build in person starts to diminish – especially with so much content available online. If Microsoft wants Build to be a must-attend event in the future, they need to start incorporating aspects of TechEd, past PDCs, and other conferences so those that want to leave with more than free swag have something to attract them.

    Read the article

  • Internal Server Error issue with Codeigniter. Cant open linked page

    - by Adam Perlis
    Hi, I am VERY new to backend development. I am trying to set up user registration using CodeIgniter. I have gotten pretty far, but for some reason I can't get the "Create Account" button working. I have my site up and running at t-mrkt.com/mycisite – would you guys mind taking a look and seeing where I might be going wrong? Here is a link to my files: https://www.dropbox.com/sh/z54j4qd50yf3t0p/MnnghGBB_D Thanks for your help! Adam

    Read the article

  • Managing hosts and iptables in scalable architecture

    - by hakunin
    Let's say I have a load balancer in front of 3 app servers. Let's say I also have these services available at certain IPs:
      Postgres server
      Redis server
      ElasticSearch server
      Memcached server 1
      Memcached server 2
      Memcached server 3
    So that's 6 nodes at 6 different IP addresses. Naturally, every one of my 3 app servers needs to talk to these 6 servers above. Then, to make it a bit funkier, I also have 3 worker servers. Each worker also talks to the above 6 servers, but thankfully workers and apps never need to talk to each other. Now's the kicker: everything is on Digital Ocean VPS. What that means is: you have no private network, no private IPs. You only have a separate, random IP address on each machine. You can't mask them or anything. So in order to build a secure environment I would have to configure some iptables rules. For example:
      open the app servers to be accessed by the load balancer server
      open the Redis, ES, PG, and each memcached server to be accessed by each app's IP and each worker's IP
    This means that every time I add an app or worker I have to also reconfigure iptables on those 6 servers above to welcome the new app or worker. Is there a way to simplify this type of setup? I was thinking: what if there was a gateway machine between apps/workers and the above 6 machines? This way all the interaction would always happen via the gateway server, and when I add a new app or worker I wouldn't need to teach the 6 servers to let it in. If I went this route, then I'd hope a small 512MB server could handle that, and there wouldn't be almost any overhead. Or would there? Please help with the best way to handle this situation. I would appreciate an answer as concrete as possible. I don't think this is too specific, because this general architecture is very common, and Digital Ocean is becoming increasingly popular. A concrete solution here would be much appreciated by many.
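    For reference, the kind of per-service rule being described might look like this (a sketch only; the addresses and port are placeholders, and it assumes the Postgres box otherwise drops or rejects traffic on that port):

      # on the Postgres server: allow each app/worker IP on 5432, drop everything else
      for ip in 198.51.100.11 198.51.100.12 198.51.100.13; do
          iptables -A INPUT -p tcp -s "$ip" --dport 5432 -j ACCEPT
      done
      iptables -A INPUT -p tcp --dport 5432 -j DROP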

    Read the article

  • IPTables forward from only one ip on my server

    - by user1307079
    I was able to get my server to forward connections on a certain port to a different IP, but when I add -d to specify an IP to forward from, it does not work. This is what I am trying (checked with iptables -t nat -nvL):

      iptables -t nat -A PREROUTING -d 173.208.230.107 -p tcp --dport 80 -j DNAT --to-destination 38.105.20.226:80

    It works fine without the -d. Here is my ifconfig dump:

      em1     Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54
              inet addr:173.208.230.106 Bcast:173.208.230.111 Mask:255.255.255.248
              inet6 addr: fe80::2a0:d1ff:feed:d054/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              RX packets:100058 errors:0 dropped:0 overruns:0 frame:0
              TX packets:18941701 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:12779711 (12.1 MiB) TX bytes:825498499 (787.2 MiB)
              Memory:fbde0000-fbe00000
      em1:9   Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54
              inet addr:173.208.230.107 Bcast:173.208.230.111 Mask:255.255.255.248
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              Memory:fbde0000-fbe00000
      em1:10  Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54
              inet addr:173.208.230.108 Bcast:173.208.230.111 Mask:255.255.255.248
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              Memory:fbde0000-fbe00000
      em1:11  Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54
              inet addr:173.208.230.109 Bcast:173.208.230.111 Mask:255.255.255.248
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              Memory:fbde0000-fbe00000
      em1:12  Link encap:Ethernet HWaddr 00:A0:D1:ED:D0:54
              inet addr:173.208.230.110 Bcast:173.208.230.111 Mask:255.255.255.248
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              Memory:fbde0000-fbe00000
      lo      Link encap:Local Loopback
              inet addr:127.0.0.1 Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING MTU:16436 Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

    Read the article

  • hp DL380 G4 won't boot with disk plugged into front USB

    - by Kev
    We outgrew a few older external USB backup drives and purchased WD My Passport 1 TB USB 3.0 drives to replace them. When they are plugged into the front of our G4, it will blink forever after the BIOS (which is current, BTW) and never boot, even though the USB disks are not "bootable" per se. Our old drives did not exhibit this behaviour (so I don't think it's the type of issue that I've read about with other servers). The old drives were USB 2.0, but this shouldn't make a difference, AFAICT: the specs say all of the G4's USB ports are the same, 2.0, anyway, so I'm not sure how one port would handle a USB 3.0 device better than another. If we plug the new drives into one of the back ports, it boots fine. What's the cause? My concern is that the front USB port, and possibly the motherboard, might be starting to die. (We are experiencing other strange issues with them, or were initially, like intermittent file permission errors despite wide-open ACLs on these local drives, but some serverfault users have me convinced they may be coincidental software/security related issues.)

    Read the article

  • Logstash shipper & server on the samebox

    - by keftes
    I'm trying to set up a central logstash configuration. However, I would like to be sending my logs through syslog-ng and not third-party shippers. This means that my logstash server is accepting, via syslog-ng, all the logs from the agents. I then need to install a logstash process that will be reading from /var/log/syslog-clients/* and grabbing all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM. In theory I need to also configure a second logstash process that will read from redis and start indexing the logs and send them to elasticsearch. My question: Do I have to use two different logstash processes (shipper & server) even if I am on the same box (I want one log server instance)? Is there any way to just have one logstash configuration and have the process read from syslog-ng, write to redis, and also read from redis and output to elasticsearch? Diagram of my setup:

      [client] ---syslog-ng---> [log server] ---syslog-ng---> logstash-shipper ---> redis ---> logstash-server ---> elasticsearch ---> kibana
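    As a sketch of the single-configuration idea (untested; the file path and redis key come from the question or are placeholders, the tags trick is an assumption, and the if/else blocks require a Logstash version that supports conditionals), one combined config might look like this:

      input {
          file  { path => "/var/log/syslog-clients/*" type => "syslog" }
          redis { host => "127.0.0.1" data_type => "list" key => "logstash" tags => ["from_redis"] }
      }
      output {
          if "from_redis" in [tags] {
              # indexer half: events pulled back off redis go to elasticsearch
              elasticsearch { host => "localhost" }
          } else {
              # shipper half: fresh file events get pushed onto the redis list
              redis { host => "127.0.0.1" data_type => "list" key => "logstash" }
          }
      }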

    Read the article

  • Installing SSL certificate on JBoss

    - by Teddy
    I have a server that runs JBoss. When I type a bad URL on that server it reports a version like this: JBossWeb/2.0.1.GA – what version of JBoss would that be? An SSL certificate will be bought and provided for me so that I can install it in JBoss. I would really appreciate any HOWTO or any information on how to install a ready-made SSL certificate on JBoss. Do I need to generate any files with openssl when this SSL certificate is bought from some other company that sells SSL certificates? Thanks in advance for any help.
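    For orientation, JBossWeb 2.0.x typically indicates a JBoss AS 4.2.x installation, and JBoss Web is Tomcat-based, so HTTPS is usually enabled by importing the certificate into a Java keystore and pointing an SSL connector at it in server.xml. A rough sketch, assuming the key pair and CSR were generated with keytool in the same keystore; the paths, alias and password are placeholders and the exact server.xml location depends on the JBoss version:

      # import the CA-signed certificate into the keystore holding your private key
      keytool -import -trustcacerts -alias myserver -file mycert.crt -keystore server.keystore

      <!-- then, in JBoss Web's server.xml, enable an HTTPS connector -->
      <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                 scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"
                 keystoreFile="${jboss.server.home.dir}/conf/server.keystore"
                 keystorePass="changeit" />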

    Read the article

  • suphp how disable ls /

    - by Pol Hallen
    Using suPHP, I set a php.ini for every virtual host. In php.ini I also set: open_basedir = /home/site1. PHP scripts run, but if I have a script that does ls / I can see the whole root directory. How can I close this security hole?

      <VirtualHost *:80>
          ServerName site1
          ServerAlias www.site1.com
          DirectoryIndex index.html index.htm
          DocumentRoot /home/site1/
          suPHP_Engine on
          AddHandler x-httpd-php .php .php3 .php4 .php5
          suPHP_AddHandler x-httpd-php
          # THIS READS php.ini
          suPHP_ConfigPath /home/site1/
          <Directory /home/site1/>
              Options -Includes -Indexes -FollowSymLinks -ExecCGI -MultiViews
              AllowOverride none
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>
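    For context, open_basedir only restricts PHP's own file functions; it does not stop shell commands run through exec() and friends, which is presumably how the script lists /. One way to close that, sketched against the per-vhost php.ini described above, is to disable the execution functions:

      ; in /home/site1/php.ini
      open_basedir = /home/site1
      ; block shell escapes so scripts cannot run "ls /" and similar
      disable_functions = exec, shell_exec, system, passthru, popen, proc_open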

    Read the article

  • Unix Shell/SSH config to allow TCP port forwarding without showing a command prompt

    - by Raphael K
    I'm running Debian Linux. I'd like to have a user account that is able to connect via SSH for TCP forwarding only, without a command prompt. E.g. the following would work (from a remote computer): ssh -D1234 user@myhost, but no command prompt would appear. Using a shell like /bin/false or /sbin/nologin is too restrictive, as it doesn't even allow the user to log in. A shell that only allows the "exit" or Ctrl+D commands would do the job. I know that something similar is possible to allow only SFTP, but I can't find the equivalent for TCP forwarding. Thanks
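    One commonly suggested direction, sketched here untested (the user name is a placeholder), is to constrain the account in sshd_config and have the client skip the remote shell with -N:

      # /etc/ssh/sshd_config
      Match User tunneluser
          AllowTcpForwarding yes
          X11Forwarding no
          AllowAgentForwarding no
          ForceCommand /bin/echo "This account is for port forwarding only"

      # client side: request the tunnel but no shell or command
      # ssh -N -D1234 tunneluser@myhost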

    Read the article

  • APC fragmentation on EC2 Micro for Wordpress + W3TC

    - by Maarten Provo
    I'm trying to optimize APC for my Amazon EC2 Micro server running one WordPress site with W3TC. I've started with the settings advised by TechZilla in another topic, but I keep getting high fragmentation with 50% of the space free. I've uploaded an image to http://www.maartenprovo.be/downloads/apc.jpg but I can't post it here since I need at least 10 reputation. What values can I optimize to prevent fragmentation?

      [apc]
      apc.enabled=1
      apc.shm_segments=1
      ;32M per WordPress install
      apc.shm_size=164M
      ;Leave at 2M or lower. WordPress doesn't have any file sizes close to 2M
      apc.max_file_size=2M
      ;Relative to the number of cached files
      apc.num_files_hint=1000
      ;Relative to the size of WordPress
      apc.user_entries_hint=4096
      ;The number of seconds a cache entry is allowed to idle in a slot before APC dumps the cache
      apc.ttl=7200
      apc.user_ttl=7200
      apc.gc_ttl=3600
      ;Auto update cache files on change in WP-ADMIN or W3TC
      apc.stat=1
      ;This MUST be 0, WP can have errors otherwise!
      apc.include_once_override=0
      ;Only set to 1 while debugging
      apc.enable_cli=0
      ;Allow 2 seconds after a file is created before it is cached to prevent users from seeing half-written/weird pages
      apc.file_update_protection=2
      ;Ignore files
      apc.filters
      apc.slam_defense = 0
      apc.write_lock = 1
      apc.cache_by_default=1
      apc.use_request_time=1
      apc.mmap_file_mask=/var/tmp/apc.XXXXXX
      apc.stat_ctime=0
      apc.canonicalize=1
      apc.write_lock=1
      apc.report_autofilter=0
      apc.rfc1867=0
      apc.rfc1867_prefix =upload_
      apc.rfc1867_name=APC_UPLOAD_PROGRESS
      apc.rfc1867_freq=0
      apc.rfc1867_ttl=3600
      apc.lazy_classes=0
      apc.lazy_functions=0

    Read the article

  • Skipping nginx PHP cache for certain areas of a site?

    - by DisgruntledGoat
    I have just set up a new server with nginx (which I am new to) and PHP. On my site there are essentially 3 different types of files: static content like CSS, JS, and some images (most images are on an external CDN); the main PHP/MySQL database-driven website, which essentially acts like a static site; and a dynamic PHP/MySQL forum. It is my understanding from this question and this page that the static files need no special treatment and will be served as fast as possible. I followed the answer from the above question to set up caching for PHP files and now I have a config like this:

      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_cache one;
          fastcgi_cache_key $scheme$host$request_uri;
          fastcgi_cache_valid 200 302 304 30m;
          fastcgi_cache_valid 301 1h;
          include /etc/nginx/fastcgi_params;
          fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name;
          fastcgi_param HTTPS off;
      }

    However, now I want to prevent caching on the forum (either for everyone or only for logged-in users – I haven't checked if the latter is feasible with the forum software). I've heard that "if is evil" inside location blocks, so I am unsure how to proceed. With the if inside the location block I would probably add this in the middle:

      if ($request_uri ~* "^/forum/") {
          fastcgi_cache_bypass 1;
      }
      # or possibly this, if I'm able to cache pages for anonymous visitors
      if ($request_uri ~* "^/forum/" && $http_cookie ~* "loggedincookie") {
          fastcgi_cache_bypass 1;
      }

    Will that work fine, or is there a better way to achieve this?
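    One way to avoid if entirely, sketched as an untested example (a second map keyed on $http_cookie could be chained in for the logged-in-only variant; loggedincookie comes from the question, everything else uses standard nginx directives), is to compute a bypass flag with a map block at the http level and reference it in the PHP location:

      # in the http {} block
      map $request_uri $skip_cache {
          default      0;
          ~^/forum/    1;
      }

      # inside the existing "location ~ \.php$" block
      fastcgi_cache_bypass $skip_cache;   # don't answer these requests from the cache
      fastcgi_no_cache     $skip_cache;   # and don't store their responses either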

    Read the article

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm, and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct?
    Step #1 - Create the certificate to be used. I've configured a certificate to use with RD Web Access. The certificate is stored in the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. By letting RD Web Access generate its own certificate, I found that the following properties are required: Enhanced Key Usage (Server Authentication, Client Authentication – this may not be required, but the self-signed certificate includes it), Key Usage (Digital Signature, Key Agreement), and Subject Alternative Name (DNS Name=domain.com).
    Detour about self-signed certificate generation: as a quick detour, I was able to work around a problem with creating self-signed certificates using PowerShell. The documentation for the New-RDCertificate cmdlet gives the following example:

      PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force
      PS C:\> New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"

    Typing this into the shell will result in an error message claiming that a function, Get-Server, cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop module with Import-Module RemoteDesktop.
    Step #2 - Observe out-of-box behavior. The first time you visit the Deployment Properties dialog box, by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping, you will see the following screen: this window is misleading because the level field is listed as "Not Configured". If I understand correctly, all three of the role services are using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website; the certificate being used also appears in the Certificates MMC.
    Step #3 - Assign the new certificate. The Deployment Properties dialog box will allow me to select my existing certificate. The certificate must be placed within the local computer's Certificates MMC in the "Personal" certificate store. The private key will need to be exportable, and you will need to provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI will indicate that it is ready to accept the new configuration. Once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time: there is no certificate error.
    Step #4 - The GUI fails to maintain its state. If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used, and I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate, this window will update to an "Untrusted" level.
This setting will then be maintained through the opening and closing of the Deployment Properties dialog box. Is there anything I can do to have my settings appear to stick? I feel like something is wrong when the GUI claims I haven't fully configured certificates.
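    One way to check the configuration outside the GUI, offered as a sketch rather than a confirmed fix (it assumes the RemoteDesktop module's Get-RDCertificate cmdlet is available on this build, and rdcb.contoso.com stands in for the connection broker), is to query the deployment directly from PowerShell:

      Import-Module RemoteDesktop
      # list the configured certificate for each RDS role service
      Get-RDCertificate -ConnectionBroker rdcb.contoso.com |
          Format-Table Role, Level, Subject, ExpiresOn -AutoSize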

    Read the article

  • How can I copy a SQL Azure Database to a server on a different subscription?

    - by Tragedian
    I'm trying to create a copy of a SQL Azure database. The source and destination servers are associated with two different subscriptions, but they are located in the same data-centre. I've been reading Copying Databases in Windows SQL Azure Database and How to: Copy Your Databases (Windows Azure SQL Database) for instructions on this, but I'm not sure if my scenario is covered. I would like to use the CREATE DATABASE Database1B AS COPY OF Database1A; command, but I don't know what the implications are on the accounts used, or what I need to set up between the two databases before this command is possible. Has anybody achieved this type of copy and can elaborate?
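    For reference, the documented cross-server form of the copy statement, run while connected to the master database of the destination server, looks like the sketch below; whether the two servers can live under different subscriptions, and the matching-login requirements, are exactly what the question is asking about, so treat those as unverified here:

      -- executed in master on the destination server; "sourceserver" is a placeholder
      CREATE DATABASE Database1B AS COPY OF sourceserver.Database1A;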

    Read the article

  • How to setup a hyper-v domain with internet access

    - by fynnbob
    First off let me say that I'm not a network admin or server guy, I know very little about that stuff. What I'm trying to do is setup a virtualized domain using hyper-V. Here is the configuration: Physical Server: 4Mb RAM Windows Server 2008 R2 running Hyper-V Virtual Environment: One Domain Controller running Windows Server 2008 R2 One Client running Windows Server 2008 R2 I have been successful in setting up a virtual domain controller and adding a virtual client to that domain controller but I'm stuck at trying to give the virtual Environment Internet access. I can give the client VM Internet access if I remove them from the virtual domain but once I add them back to the virtual domain, Internet access is gone. I've read articles describing many different ways this can be done (using RRAS with NAT, using a wireless connection, etc...) but all of those articles only cover a small piece of the setup and also seem to be geared towards people who know there way around networking and servers which I don't. I'd like to know more but my thing is software development and I have my hands full trying to keep up with everything in that realm. I simply want to setup a virtual domain with Internet access for testing. Can anyone point me to any "for Dummy's" type information on how to setup this type of environment or can anyone provide this kind of step-by-step help. Any help would be very much appreciated.

    Read the article

  • MSTSC RDP over the public internet

    - by stuart Brand
    My first question, so please be gentle :) I have a client who is insisting that they have to let their third-party vendor support access their server directly from the internet via RDP. Our policy does not allow direct access to the infrastructure from outside of the data centre for administration, except from an approved VPN connection and then a virtual desktop from there on to the servers. I am now in the situation where I must give good reasons why it is dangerous to use RDP over the public internet. Any help would be appreciated. Thanks in advance, Stuart

    Read the article
