Search Results

Search found 27047 results on 1082 pages for 'multiple projects'.

  • Blueprints for Oracle NoSQL Database

    - by dan.mcclary
    I think that some of the most interesting analytic problems are graph problems. I'm always interested in new ways to store and access graphs. As such, I really like the work being done by Tinkerpop to create Open Source Software to make property graphs more accessible over a wide variety of datastores. Since key-value stores like Oracle NoSQL Database are well-suited to storing property graphs, I decided to extend the Blueprints API to work with it. Below I'll discuss some of the implementation details, but you can check out the finished product here: http://github.com/dwmclary/blueprints-oracle-nosqldb.

    What's in a Property Graph?

    In the most general sense, a graph is just a collection of vertices and edges. Vertices and edges can have properties: weights, names, or any number of other traits. In an undirected graph, edges connect vertices without direction. A directed graph specifies that all edges have a head and a tail – a direction. A multi-graph allows multiple edges to connect two vertices. A "property graph" encompasses all of these traits.

    Key-Value Stores for Property Graphs

    Key-value stores like Oracle NoSQL Database tend to be ideal for implementing property graphs. First, if any vertex or edge can have any number of traits, we can treat it as a hash map. For example:

        Vertex["name"] = "Mary"
        Vertex["age"] = 28
        Vertex["ID"] = 12345

    and so on. This is a natural key-value relationship: the key "name" maps to the value "Mary." Moreover, if we maintain two hash maps, one for vertex objects and one for edge objects, we've essentially captured the graph. As such, any scalable key-value store is fertile ground for planting graphs.

    Oracle NoSQL Database as a Scalable Graph Database

    While Oracle NoSQL Database offers useful features like tunable consistency, what lends it to storing property graphs is the storage guarantee around its key structure. Keys in Oracle NoSQL Database are divided into two parts: a major key and a minor key. The storage guarantee is simple: major keys will be distributed across storage nodes, which could encompass a large number of servers. However, all minor keys which are children of a given major key are guaranteed to be stored on the same storage node. For example, the vertices

        /Personnel/Vertex/1
        /Personnel/Vertex/2

    may be stored on different servers, but

        /Personnel/Vertex/1-/name
        /Personnel/Vertex/1-/age

    will always be on the same server. This means that we can structure our graph database such that retrieving all the properties for a vertex or edge requires I/O from only a single storage node. Moreover, Oracle NoSQL Database provides a storeIterator which allows us to iterate over a huge number of vertices and edges in a scalable fashion. By storing the vertices and edges as major keys, we guarantee that they are distributed evenly across all storage nodes. At the same time, we can use a partial major key to iterate over all the vertices or edges (e.g. we search over /Personnel/Vertex to iterate over all vertices).

    Fork It!

    The Blueprints API and Oracle NoSQL Database present a great way to get started using a scalable key-value database to store and access graph data. However, a graph store isn't useful without a good graph to work on. I encourage you to fork or pull the repository, store some data, and try using Gremlin or any other language to explore.
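
    To make the two-hash-map idea concrete, here is a minimal, illustrative sketch (written in C# for brevity; the Blueprints implementation itself is Java, and the key layout simply mirrors the /Personnel examples above) of a property graph reduced to key-value pairs:

        using System;
        using System.Collections.Generic;

        class PropertyGraphSketch
        {
            static void Main()
            {
                // One map for vertices, one for edges, as described above.
                // Major key = the vertex or edge; minor keys = its properties.
                var vertices = new Dictionary<string, object>();
                var edges = new Dictionary<string, object>();

                // All minor keys under /Personnel/Vertex/1 would live on the
                // same storage node in Oracle NoSQL Database.
                vertices["/Personnel/Vertex/1-/name"] = "Mary";
                vertices["/Personnel/Vertex/1-/age"] = 28;
                vertices["/Personnel/Vertex/2-/name"] = "John";

                // An edge connecting vertex 1 to vertex 2, with a property.
                edges["/Personnel/Edge/100-/label"] = "knows";
                edges["/Personnel/Edge/100-/out"] = "/Personnel/Vertex/1";
                edges["/Personnel/Edge/100-/in"] = "/Personnel/Vertex/2";

                // A prefix scan over a partial major key stands in for the
                // storeIterator over /Personnel/Vertex mentioned above.
                foreach (var kv in vertices)
                    if (kv.Key.StartsWith("/Personnel/Vertex/"))
                        Console.WriteLine(kv.Key + " = " + kv.Value);
            }
        }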

  • How can I thoroughly evaluate a prospective employer?

    - by glenviewjeff
    We hear much about code smells, test smells, and even project smells, but I have heard no discussion about employer "smells" outside of the Joel Test. After much frustration working for employers with a bouquet of unpleasant corporate-culture odors, I believe it's time for me to actively seek a more mature development environment. I've started assembling a list of questions to help vet employers by identifying issues during a job interview, and am looking for additional ideas. I suppose this list could easily be modified by an employer to vet an employee as well, but please answer from the interviewee's perspective.

    I think it would be important to ask many of these questions of multiple people to find out if consistent answers are given. For the most part, I tried to put the questions in each section in the order they could be asked. An undesired answer to an early question will often make follow-ups moot.

    Values
    - What constitutes "well-written" software?
    - What attributes does a good developer have? Same question for a manager.

    Process
    - Do you have a development process? How rigorously do you follow it?
    - How do you decide how much process to apply to each project?
    - Describe a typical project lifecycle. Ask the following if they don't come up otherwise:
    - Waterfall/iterative: How much time is spent in upfront requirements gathering? Upfront design?

    Testing
    - Who develops tests (developers or separate test engineers)? When are they developed?
    - When are the tests executed? How long do they take to execute?
    - What makes a good test? How do you know you've tested enough?
    - What percentage of code is tested?

    Review
    - What is the review process like? What percentage of code is reviewed? Design?
    - How frequently can I expect to participate as code/design reviewer/reviewee?
    - What are the criteria applied to review, and where do the criteria come from?

    Improvement
    - What new tools and techniques have you evaluated or deployed in the past year?
    - What training courses have your employees been given in the past year?
    - What will I be doing for the first six months in your company? (hinting at what kind of organized mentorship/training has been thought through, if any)
    - What changes to your development process have been made in the past year?
    - How do you improve and learn from your mistakes as an organization?
    - What was your organization's biggest mistake in the past year, and how was it addressed?
    - What feedback have you given to management lately? Was it implemented? If not, why?
    - How does your company use "best practices"? How do you seek them out from the outside or within, and how do you share them with each other?

    Ethics
    - Tell me about an ethical problem you or your employees experienced recently and how it was resolved.
    - Do you use open-source software? What open-source contributions have you made?

    Follow-Ups
    I liked what @jim-leonardo said on this Stack Overflow question: "Really a thing to ask yourself: 'Does this person seem like they are trying to recruit me and make me interested?' I think this is one of the most important bits. If they seem to be taking the attitude that the only one being interviewed is you, then they probably will treat you poorly. Good interviewers understand they have to sell the position as much as the candidate needs to sell themselves."

    @SethP added: "Glassdoor.com is a good web site for researching potential employers. It contains information about how specific companies conduct interviews..."

  • Deploying an SSL Application to Windows Azure – The Dark Secret

    - by ToStringTheory
    When working on an application that had been in production for some time, but was about to have a shopping cart added to it, the necessity for SSL certificates came up. When ordering the certificates through the vendor, the certificate signing request (CSR) was generated through the provider's (http://register.com) web interface, and within a day, we had our certificate. At first, I thought that the certification process would be the hard part… Little did I know that my fun was just beginning…

    The Problem

    I'll be honest, I had never really secured a site before with SSL. This was a learning experience for me in the first place, but little did I know that I would be learning more than the simple procedure. I understood a bit about SSL already, the mechanisms in how it works – the secure handshake, CAs, chains, etc… What I didn't realize was the importance of the CSR in the whole process. When the CSR is created, a public key is created at the same time, as well as a private key that is stored locally on the PC that generated the request. When the certificate comes back and you import it back into IIS (assuming you used IIS to generate the CSR), all of the information is combined together and the SSL certificate is added into your store.

    Since the online interface was used to generate the CSR when the certificate was ordered for our site, the certificate came back to us in 5 separate files:

    1. A root certificate – (*.crt file)
    2. An intermediate certificate – (*.crt file)
    3. Another intermediate certificate – (*.crt file)
    4. The SSL certificate for our site – (*.crt file)
    5. The private key for our certificate – (*.key file)

    Well, in case you don't know much about Windows Azure and SSL certificates, the first thing you should learn is that certificates can only be uploaded to Azure if they are in a PFX package – securable by a password. Also, in the case of our SSL certificate, you need to include the private key with the file. As you can see, we didn't have a PFX file to upload. If you don't get the simple PFX from your hosting provider, but rather the multiple files, you will soon find out that the process has turned from something that should be simple – to one that borders on a circle of hell… probably between the fifth and seventh somewhere…

    The Solution

    The solution is to take the files that make up the certificate's chain and key, and combine them into a file that can be imported into your local computer's store, as well as uploaded to Windows Azure. I cannot take the credit for this information, as I simply researched a while before finding out how to do this.

    1. Download the OpenSSL for Windows toolkit (Win32 OpenSSL v1.0.1c)
    2. Install the OpenSSL for Windows toolkit
    3. Download and move all of your certificate files to an easily accessible location (you'll be pointing to them in the command prompt, so I put them in a subdirectory of the OpenSSL installation)
    4. Open a command prompt
    5. Navigate to the folder where you installed OpenSSL
    6. Run the following command:

        openssl pkcs12 -export -out {outcert.pfx} -inkey {keyfile.key} -in {sslcert.crt} -certfile {ca1.crt} -certfile {ca2.crt}

    From this command, you will get a file, outcert.pfx, with the sum total of your SSL certificate (sslcert.crt), private key {keyfile.key}, and as many CA/chain files as you need {ca1.crt, ca2.crt}. Taking this file, you can then import it into your own IIS in one operation, instead of importing each certificate individually.
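
    (As an aside, if you would rather script the local import than click through IIS, a short C# sketch like the following does the same one-operation import via the .NET X509Store API; the file name and password are placeholders:)

        using System.Security.Cryptography.X509Certificates;

        class ImportPfx
        {
            static void Main()
            {
                // Load the combined PFX produced by the openssl command above.
                var cert = new X509Certificate2("outcert.pfx", "your-password",
                    X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);

                // Add the certificate, private key included, to the machine store.
                var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
                store.Open(OpenFlags.ReadWrite);
                store.Add(cert);
                store.Close();
            }
        }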
    You can also upload the PFX to Azure, and once you add the SSL certificate links to the cloud project in Visual Studio, you're good to go!

    Conclusion

    When I first looked around for a solution to this problem, there were not many places online that had the information that I was looking for. While what I ended up having to do may seem obvious, it isn't for everyone, and I hope that this can at least help one developer out there solve the problem without hours of work!

  • Is Cloud Security Holding Back Social SaaS?

    - by Mike Stiles
    The true promise of social data co-mingling with enterprise data to influence and inform social marketing (all marketing, really) lives in cloud computing. The cloud brings processing power, services, speed and cost savings the likes of which few organizations could ever put into action on their own. So why wouldn't anyone jump into SaaS (Software as a Service) with both feet? Cloud security.

    Being concerned about security is proper and healthy. That just means you're a responsible operator. Whether it's protecting your customers' data or trying to stay off the radar of regulatory agencies, you have plenty of reasons to make sure you're as protected from hacking, theft and loss as you can possibly be. But you also have plenty of reasons to not let security concerns freeze you in your tracks, preventing you from innovating, moving the socially-enabled enterprise forward, and keeping up with competitors who may not be as skittish regarding SaaS technology adoption.

    Over half of organizations are transferring sensitive or confidential data to the cloud, an increase of 10% over last year. With the roles and responsibilities of CMOs, CIOs and other C's changing, the first thing you should probably determine is who should take point on analyzing cloud software options, providers, and policies. An oft-quoted Ponemon Institute study found 36% of businesses don't have a cloud security policy at all. So that's as good a place to start as any. What applications and data are you comfortable housing in the cloud? Do you have a classification system for data that clearly spells out where data types can go and how they can be used? Who, both internally and at the cloud provider, will function as admins? What are the different levels of admin clearance? Will your security policies and procedures sync up with those of your cloud provider?

    The key is verifiable trust. Trust in cloud security is actually going up. 1/3 of organizations polled say it's the cloud provider who should be responsible for data protection. And when you look specifically at SaaS providers, that expectation goes up to 60%. 57% "strongly agree" or "agree" there's more confidence in cloud providers' ability to protect data. In fact, some businesses bypass the "verifiable" part of verifiable trust. Just over half have no idea what their cloud provider does to protect data.

    And yet, according to the "Private Cloud Vision vs. Reality" InformationWeek Report, 82% of organizations say security/data privacy are one of the main reasons they're still holding the public cloud at arm's length. That's going to be a tough position to maintain, because just as social is rapidly changing the face of marketing, big data is rapidly changing the face of enterprise IT. Netflix, which is particularly big on the benefits of the cloud, says, "We're systematically disassembling the corporate IT components." An enterprise can never realize the full power of big data, nor get the full potential value out of it, if it's unwilling to enable the integrations and dataset connections necessary in the cloud.

    Because integration is called for to reduce fragmentation, a standardized platform makes a lot of sense. With multiple components crafted to work together, you're maximizing scalability, optimization, cost effectiveness, and yes, security and identity management benefits. You can see how the incentive is there for cloud companies to develop and add ever-improving security features, making cloud computing an eventual far safer bet than traditional IT.

    @mikestiles

    Photo: stock.xchng

  • St. Louis Day of .NET 2010

    - by Scott Spradlin
    Register now at http://www.stlouisdayofdotnet.com/registration.aspx

    The Date

    This year's conference will be held on Friday and Saturday, August 20-21, 2010, at the Ameristar Conference Center in St. Charles, Missouri. Sessions will begin at 8:00 a.m. and run through 4:30 p.m. on both days. Registration and sign-in will open at 7:00 a.m. on Friday morning, and will run throughout the event.

    The Venue

    Based on the almost unanimous feedback from last year's event, we are very excited to bring our conference back to the Ameristar Conference Center. The Ameristar has worked with us to offer a great rate on their large suites, should you be traveling from out of town -- or are just interested in a night away from home. Attendees can book a suite at a discounted rate of only $139/night, which is a substantial discount from their standard rates. We encourage you to take the opportunity to hang around, spend the night, and enjoy the social events and networking opportunities that we have planned.

    If you are interested in taking advantage of the discounted hotel rate, you can reserve your room online at Ameristar's Online Registration Site, using the special offer code GDOTH10. You can also call the hotel's reservation number at (636) 940-4301 and let them know you are attending the St. Louis Day of .NET 2010 to receive your discounted rates.

    The Content

    All attendees will have access to over 80 technical sessions by many great regional and national technology experts, covering a wide range of .NET development topics. In addition to refreshments throughout the event, all attendees will be provided with breakfast and lunch on both days of the conference. You will find sessions on many of the most current .NET development topics, including:

    - Visual Studio .NET 2010
    - Silverlight 4.0
    - Windows Phone 7 Series Development
    - ASP.NET MVC
    - DotNetNuke
    - SharePoint 2010
    - Architecture
    - Windows Presentation Foundation (WPF)
    - And much, much more...

    This year's event will also include many informal "Open Space" sessions where all attendees with similar interests can discuss current trends or issues they are facing in today's real-world development environments. Finally, all attendees are invited to a social networking event at the HOME Nightclub at the Ameristar, which will be held on the Friday evening of the conference.

    The Cost

    The cost of this year's conference is $200 per attendee. However, for a limited time we are offering a $75 discount for early registrants. To take advantage of this discounted rate, you must register on our site prior to July 10, 2010. We accept Visa, MasterCard, and American Express. In addition, this year we allow a single user of our site to easily register multiple attendees at once. To register, please visit the official St. Louis Day of .NET site at www.stldodn.com, and click on the "Registration" tab.

    For More Information

    For the most up-to-the-minute information on the event, please follow us online:

    - Twitter: @stldodn
    - Facebook: http://www.facebook.com/stldodn

    We strongly encourage you to share this email, as well as the attached flier, with your peers and colleagues, and anyone else you think might be interested in this exciting event. If you have any questions regarding registration, you can email us at [email protected] and we will be happy to address them.

    Sponsors

    We are extremely thankful to the many great sponsors who are partnering with us this year to help make the St. Louis Day of .NET 2010 a huge success. (There are still sponsorship opportunities available. For complete information, visit the sponsor page on the web site.)

  • The All New Hotmail Looks Very Impressive [Video Tour]

    - by Gopinath
    With loads of new features being introduced into GMail every now and then, Microsoft can't sit and relax any more. Microsoft realized this and worked hard to introduce really impressive features in the upcoming version of Windows Live Hotmail that was previewed a couple of days ago. Most of the new features announced in the upcoming version focus on the most important need of email users – de-cluttering the mail box and managing email overload easily. Here are the highlights of the new features.

    New Features

    - Sweep away clutter – This is the most impressive of the new features. It allows you to manage email overload. If you've subscribed to a newsletter but decided not to allow it into your inbox, you can activate the sweep feature to move all the messages of the newsletter into a folder other than your inbox. This may sound similar to the filters option in GMail, but the workflow is very easy in Hotmail.

    - Quickly find messages – Easy-to-use options are provided to see mails in separate views, like mails from contacts, social networking mail, mails from email subscription services, etc. Now it's easy to prioritize email checking however you wish. I prefer to check mails from my contacts first, then social networking messages, and then the newsletter subscriptions.

    - Improved spam detection – The spam detection rules are tightened for better spam protection, and Hotmail also learns from user actions to effectively catch spam.

    - No more mail box storage restrictions – With a smart decision by Microsoft, users no longer need to worry about the storage restrictions of their mail box – large attachments of Hotmail can be stored in Windows Live SkyDrive. "With Hotmail, we've combined the simplicity of sending photos through email with the power of Windows Live SkyDrive so that you can send up to 200 photos, each up to 50 MB in size, all in a single email. You can send all your vacation photos at once without worrying about attachment limits."

    - Excellent integration with Office Web Apps – Viewing and editing of office documents attached to emails are made very easy by integrating Office Web Apps with Hotmail. When you receive a document/presentation/spreadsheet in Hotmail, you can view it, edit it, save it, or even send the modified document back to the original sender – all without leaving Hotmail.

    - Inline viewing options for photos, videos, and social network messages – You can view photos embedded in the mail as slideshows (with the help of Silverlight), YouTube and Hulu videos can be played inline, and you can track shipping notifications.

    - Threaded conversations – Emails in Hotmail are grouped just like they are in GMail.

    - Others – Enhanced account protection, full-session SSL, multiple email accounts, subfolders, contact management.

    Video Tour Of New Features

    Here is an impressive video tour of the new Hotmail features.

    When are these new features coming to Hotmail?

    The majority of the new features announced today will be rolled out gradually to all users in the coming weeks. But advanced features like Office integration with Hotmail are expected to take a couple of months for general availability.

    Will You Switch back to Hotmail?

    Will these features lure GMail/Yahoo users to switch back to Hotmail? Maybe not immediately, but these features may keep the existing users from leaving Hotmail. I used Hotmail in the pre-GMail era, and now I use my Hotmail id only to sign in to Microsoft websites that require Hotmail authentication. It's been years since I composed a new email in Hotmail. Even though the new features announced by Hotmail are very impressive, I like the way GMail rapidly brings new features at regular intervals. If Hotmail also keeps innovating with new features at regular intervals, then there are good chances for its old users to return home.

  • ORA-4031 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error
    QUICKLINK: Note 1087773.1 ORA-4031 Diagnostics Tools [Video]

    Have you observed an ORA-04031 error reported in your alert log? An ORA-4031 error is raised when memory is unavailable for use or reuse in the System Global Area (SGA). The error message will indicate the memory pool getting errors and high-level information about what kind of allocation failed and how much memory was unavailable. The challenge with ORA-4031 analysis is that the error and associated trace is for a "victim" of the problem. The failing code ran into the memory limitation, but in almost all cases it was not part of the root problem.

    Looking for the best way to diagnose?

    When an ORA-4031 error occurs, a trace file is raised and noted in the alert log if the process experiencing the error is a background process. User processes may experience errors without reports in the alert log or traces generated. The V$SHARED_POOL_RESERVED view will show reports of misses for memory over the life of the database. Diagnostic scripts are available in Note 430473.1 to help in analysis of the problem. There is also a training video on using and interpreting the script data in Note 1087773.1.

    11g Diagnosability

    Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure was introduced, which places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4031, occurs. For earlier versions, the trace file will be written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file contains vital information about what led to the error condition. See Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video].

    Oracle Configuration Manager (OCM)

    Oracle Configuration Manager (OCM) works with My Oracle Support to enable proactive support capability that helps you organize, collect and manage your Oracle configurations.

    - Oracle Configuration Manager Quick Start Guide
    - Note 548815.1: My Oracle Support Configuration Management FAQ
    - Note 250434.1: BULLETIN: Learn More About My Oracle Support Configuration Manager

    Common Causes/Solutions

    The ORA-4031 can occur for many different reasons. Some possible causes are:

    - SGA components too small for workload
    - Auto-tuning issues
    - Fragmentation due to application design
    - Bugs/leaks in memory allocations

    For more on the 4031 and how this affects the SGA, see Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error. Because of the multiple potential causes, it is important to gather enough diagnostics so that an appropriate solution can be identified. However, most commonly the cause is associated with configuration tuning. Ensuring that MEMORY_TARGET or SGA_TARGET are large enough to accommodate the workload can get around many scenarios. The default trace associated with the error provides very high-level information about the memory problem and the "victim" that ran into the issue. The data in the default trace is not going to point to the root cause of the problem.

    When migrating from 9i to 10g and higher, it is necessary to increase the size of the Shared Pool due to changes in the basic design of the shared memory area. See Note 270935.1 Shared pool sizing in 10g.

    NOTE: Diagnostics on the errors should be investigated as close to the time of the error(s) as possible. If you must restart a database, it is not feasible to diagnose the problem until the database has matured and/or started seeing the problems again.

    Note 801787.1 Common Cause for ORA-4031 in 10gR2, Excess "KGH: NO ACCESS" Memory Allocation

    *** For reference to the content in this blog, refer to Note 1088239.1 Master Note for Diagnosing ORA-4031
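
    The V$SHARED_POOL_RESERVED view mentioned above can be queried directly. A minimal sketch (standard columns from that data dictionary view; non-zero REQUEST_MISSES or REQUEST_FAILURES accumulating over time point to pressure on the reserved pool):

        SELECT free_space,
               avg_free_size,
               requests,
               request_misses,
               request_failures,
               last_failure_size
          FROM v$shared_pool_reserved;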

  • SQL Sentry Truth-Telling and Disk Configuration

    - by AjarnMark
    Recently, SQL Sentry told me something about my SQL Server disk configurations that I just didn't want to believe, but alas, it was true. Several days ago I posted my First Impressions of the SQL Sentry Power Suite. Today's post could fall into the category of, "Hey, as long as you have that fancy tool…" Unfortunately, it also falls into the category of an overloaded worker taking someone else's word for the truth, not verifying it with independent fact-checking, and then making decisions based on that. Here's my story…

    I'm not exactly an Accidental DBA (or Involuntary DBA as Paul Randal calls it). I came to this company five years ago as a lead application developer with extensive experience in database design and development. I worked my way into management, and along the way, took over the DBA responsibilities. Fortunately, our systems run pretty smoothly most of the time, but I'm always looking for ways to make them better and to fit into my understanding of best practices. When I took over as DBA, I inherited a SQL 2000 server with about 30 databases on it supporting our main systems, and a SQL 2005 server with multiple instances. Both of these servers were configured with the operating system and application files on the C drive, data files on a different drive letter, and log files on a third drive letter. Even before I took over as DBA, I verified with a previous server administrator that this was true, and that these drive letters represented actual separate disks. He stated that they did, and I thought that all was well.

    Then one day, I'm poking around inside the SQL Sentry Performance Advisor, checking out features as I am evaluating whether to purchase the product, and I come across a Disk Configuration section. The first thing I notice is that the drives do not have the proper partition offset, which was not at all surprising to me given the age of the installation and the relative newness of that topic. But what threw me for a loop was that the graphic display appeared to be telling me that I did not in fact have three separate drives (or arrays) but rather had two, and that the log files were merely on a separate volume on the same physical array as the OS. I figured that I must be reading it wrong, so I scanned the Help file, but that just seemed to confirm my interpretation. Then I thought, "there must be something wrong with the demo version of the software! This can't be right!" But just to double-check, I went to our current server admin to talk it over with him, and sure enough, SQL Sentry was telling the truth!

    I was stunned! I quickly went through the grieving process… denial… anger… reconciliation. Here was something that I thought was such a basic truth that was turned upside down. OK, granted, this wasn't disastrous. Our databases didn't suddenly grind to a halt. I didn't get calls late at night inquiring about the sudden downturn in performance. But it was a bit of a shock to the system, in a good way, to jolt me out of taking what I had believed as the truth for granted, and instead to Trust, but Verify!

    Yes, before someone else points it out, I know that there are "free" disk management tools built in to Windows that would have told me the same thing if I had only looked at them; I did not have to buy a fancy tool to tell me that. But the fact is, until I was evaluating the tool, I had just gone with what I was told, and never bothered to check what was actually there.

    So, what things do you believe to be true but you actually never verified?
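
    For what it's worth, a quick way to do that verification yourself is to ask WMI which physical disk each partition lives on. Below is a minimal C# sketch (using the standard System.Management library and the Win32_DiskPartition WMI class; add a reference to System.Management.dll) that lists each partition's disk number and starting offset:

        using System;
        using System.Management;

        class CheckPartitions
        {
            static void Main()
            {
                // DiskIndex shows which physical disk a partition is on;
                // StartingOffset reveals whether the partition is aligned.
                var searcher = new ManagementObjectSearcher(
                    "SELECT Name, DiskIndex, StartingOffset FROM Win32_DiskPartition");
                foreach (ManagementObject p in searcher.Get())
                {
                    Console.WriteLine("{0} (physical disk {1}): offset {2} bytes",
                        p["Name"], p["DiskIndex"], p["StartingOffset"]);
                }
            }
        }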

  • Ubuntu 13.04 installation issues: unable to handle kernel paging request error

    - by user173944
    I wish I could say that I've done more for the Linux community as of recent, but I am very VERY new to all of this and I feel very much in over my head. I figured I would install Ubuntu on my computer and then I would learn and contribute to the community simultaneously. I will try to be as detailed as I can; please ask questions if you need clarification.

    I installed Ubuntu 13.04 (64-bit) on my Dell Inspiron 1501. This has an AMD Turion 64-bit TL-56 1.8 GHz mobile processor. It is a dual core. It has an ATI Radeon Xpress 1150 chipset in it as well. As of right now I only have a total of 2 GB of RAM, however I was planning on upgrading that in the near future, so I opted for the 64-bit Ubuntu 13.04.

    I first tried the live CD and everything seemed to be functioning correctly other than the wireless (but that's not the issue at hand; there are plenty of guides on the internet on how to get that functioning). The internet worked just fine when it was plugged in, so no issues there. However, once I went from that to installing 13.04 (just 13.04, no dual partitioning... I want this computer to run strictly Ubuntu) it did not work. It took me into a shell that I could not type anything into. In this shell it said "BUG: unable to handle kernel paging request" and then it called a bunch of traces and froze up. I had to hard reset the laptop.

    I tried the boot-repair program multiple times with many different settings, and typically after starting up, the laptop would say something along the lines of "recursive errors, will attempt to fix" and then it would attempt to fix a couple of things, and then the computer would freeze up after the text said "end trace"... so I had to hard reset it again. I'm not an impatient person either; when I say it would freeze up, it would be for a period of at least 15 minutes each time before I decided to hard reset. I attempted to install 12.10 on it instead and I got the same exact message, and when I ran boot-repair it did the same exact thing as before.

    I am currently in the process of running memtest86+ on the computer's memory, though I really don't believe that, nor any of the hardware, is at fault due to the fact that it was still running Windows Vista perfectly when I decided to switch over to Ubuntu. So far the memtest has come back just fine without any errors, but I've only been running it for approximately an hour.

    So this is the situation I'm in. I did notice when I was using the live disk that the video driver needed updating, so I performed that, though I'm fairly certain that has nothing to do with this. I have also attempted (though I'm not certain that my attempt was successful in accomplishing what I had planned) to manually edit the grub settings by adding acpi=0 on top of adding nomodeset to the boot commands. Like I said, I'm not sure I did that correctly, but I'm fairly certain I did. If anyone needs any more information I will be more than happy to provide it; I will post back once I get the full results of the memtest. I very much appreciate any ideas anyone else has; I've been at this for a few days to no avail... thank you
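
    (For reference, a minimal sketch of how such kernel parameters are usually applied persistently: edit /etc/default/grub and regenerate the configuration. The values here are illustrative, and acpi=off is the commonly documented spelling of that parameter.)

        # /etc/default/grub  (edit as root, e.g. sudo nano /etc/default/grub)
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset acpi=off"

        # then regenerate grub's configuration and reboot
        sudo update-grub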

  • Create and Track Your Own License Keys with PowerShell

    - by BuckWoody
    SQL Server used to have a cool little tool that would let you track your licenses. Microsoft didn't use it to limit your system or anything; it was just a place on the server where you could record that this system used this license key. I miss those days – we don't track that any more, and I want to make sure I'm up to date on my licensing, so I made my own.

    Now, there are a LOT of ways you could do this. You could add an extended property in SQL Server, add a table to a tracking database, use a text file, track it somewhere else, whatever. This is just the route I chose; if you want to use some other method, feel free. Just sharing here.

    Warning: Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall the operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk.

    And this is REALLY important. I include a disclaimer at the end of my scripts, but in this case you're modifying your registry, and that could be EXTREMELY dangerous – only do this on a test server – and I'm just showing you how I did mine. It isn't an endorsement or anything like that, and this is a "Buck Woody" thing, NOT a Microsoft thing. See this link first, and then you can read on.

    OK, here's my script:

        # Track your own licenses
        # Write a new key to be the license location
        mkdir HKCU:\SOFTWARE\Buck

        # Write the variables - one sets the type, the other sets the number, and the last one holds the key
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseType" -value "Processor"
        # Notice the DWord property type here - this one is a number so it needs that. Keep this on one line!
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseNumber" -propertytype DWord -value 4
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseKey" -value "ABCD1234"

        # Read them all back
        $LicenseKey = Get-Item HKCU:\Software\Buck
        $Licenses = Get-ItemProperty $LicenseKey.PSPath
        foreach ($License in $LicenseKey.Property) { $License + "=" + $Licenses.$License }

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

  • Surviving MATLAB and R as a Hardcore Programmer

    - by dsimcha
    I love programming in languages that seem geared towards hardcore programmers. (My favorites are Python and D.) MATLAB is geared towards engineers and R is geared towards statisticians, and it seems like these languages were designed by people who aren't hardcore programmers and don't think like hardcore programmers. I always find them somewhat awkward to use, and to some extent I can't put my finger on why. Here are some issues I have managed to identify:

    - (Both): The extreme emphasis on vectors and matrices, to the extent that there are no true primitives.
    - (Both): The difficulty of basic string manipulation.
    - (Both): Lack of or awkwardness in support for basic data structures like hash tables and "real", i.e. type-parametric and nestable, arrays.
    - (Both): They're really, really slow even by interpreted language standards, unless you bend over backwards to vectorize your code.
    - (Both): They seem to not be designed to interact with the outside world. For example, both are fairly bulky programs that take a while to launch and seem to not be designed to make simple text filter programs easy to write. Furthermore, the lack of good string processing makes file I/O in anything but very standard forms near impossible.
    - (Both): Object orientation seems to have a very bolted-on feel. Yes, you can do it, but it doesn't feel much more idiomatic than OO in C.
    - (Both): No obvious, simple way to get a reference type. No pointers or class references. For example, I have no idea how you roll your own linked list in either of these languages.
    - (MATLAB): You can't put multiple top-level functions in a single file, encouraging very long functions and cut-and-paste coding.
    - (MATLAB): Integers apparently don't exist as a first-class type.
    - (R): The basic builtin data structures seem way too high-level and poorly documented, and never seem to do quite what I expect given my experience with similar but lower-level data structures.
    - (R): The documentation is spread all over the place and virtually impossible to browse or search. Even D, which is often knocked for bad documentation and is still fairly alpha-ish, is substantially better as far as I can tell.
    - (R): At least as far as I'm aware, there's no good IDE for it. Again, even D, a fairly alpha-ish language with a small community, does better.

    In general, I also feel like MATLAB and R could be easily replaced by plain old libraries in more general-purpose languages, if sufficiently comprehensive libraries existed. This is especially true in newer general-purpose languages that include lots of features for library writers. Why do R and MATLAB seem so weird to me? Are there any other major issues that you've noticed that may make these languages come off as strange to hardcore programmers? When their use is necessary, what are some good survival tips?

    Edit: I'm seeing one issue from some of the answers I've gotten. I have a strong personal preference, when I analyze data, to have one script that incorporates the whole pipeline. This implies that a general-purpose language needs to be used. I hate having to write a script to "clean up" the data and spit it out, then another to read it back in a completely different environment, etc. I find using MATLAB/R for some of my work and a completely different language with a completely different address space and way of thinking for the rest to be a huge source of friction. Furthermore, I know there are glue layers that exist, but they always seem to be horribly complicated and a source of friction.

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning the Default WorkManager, let's have a brief introduction on what the Default WorkManager is.

    Before the WebLogic Server 9.0 release, we had the concept of Execute Queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load.

    From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (which is called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This new strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues.

    The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors. In many situations, the default Work Manager may be sufficient for most application requirements. WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them.

    The default work-manager, as its name tells, is the work-manager defined by default. Thus, all applications deployed on WLS will use it. But sometimes, when your application is already in production, it's obvious you can't take your EAR/WAR, update the deployment descriptor(s) and redeploy it. The default work-manager belongs to a thread pool which initially comes with only five threads; that's not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy. You have two options to do so.

    1) Modify config.xml

    Just add the following line(s) in your server definition:

        <server>
          <name>AdminServer</name>
          <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
          <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
          [...]
        </server>

    2) Add some JVM parameters

    Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:

        -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100

    Reboot WLS and see that the option has been taken into account.

    Disadvantage

    So far, so good. But there is a disadvantage to tuning the Default WorkManager. Internally, WebLogic Server has many work managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to its being undersized, then important work that WLS might need to do could be starved. So, limiting the self-tuning pool limits not only the default WorkManager but also all the other internal WorkManagers which WLS uses. The best alternative is to override the default WorkManager: create a WorkManager for the application and assign it to the application, instead of tuning the Default WorkManager.
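
    As a sketch of that alternative (a minimal example; the names and thread counts are illustrative, and the elements follow the WebLogic deployment descriptor schema), an application-scoped Work Manager can be declared in a web application's weblogic.xml and assigned as its dispatch policy:

        <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
          <!-- Define an application-scoped Work Manager -->
          <work-manager>
            <name>MyAppWorkManager</name>
            <min-threads-constraint>
              <name>MinThreadsConstraint</name>
              <count>20</count>
            </min-threads-constraint>
            <max-threads-constraint>
              <name>MaxThreadsConstraint</name>
              <count>100</count>
            </max-threads-constraint>
          </work-manager>

          <!-- Dispatch this web application's requests to it -->
          <wl-dispatch-policy>MyAppWorkManager</wl-dispatch-policy>
        </weblogic-web-app>

    This constrains only the application's own work, leaving WebLogic's internal WorkManagers free to draw on the shared self-tuning pool.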

  • Using the Script Component as a Conditional Split

    This is a quick walkthrough on how you can use the Script Component to perform Conditional Split-like behaviour, splitting your data across multiple outputs. We will use C# code to decide what flows to which output, rather than the expression syntax of the Conditional Split transformation.

    Start by setting up the source. For my example the source is a list of SQL objects from sys.objects, just a quick way to get some data:

        SELECT type, name FROM sys.objects

        type  name
        ----  ----
        S     syssoftobjrefs
        F     FK_Message_Page
        U     Conference
        IT    queue_messages_23007163

    Shown above is a small sample of the data you could expect to see.

    Once you have set up your source, add the Script Component, selecting Transformation when prompted for the type, and connect it up to the source. Now open the component, but don't dive into the script just yet. First we need to select some columns. Select the Input Columns page and then select the columns we want to use as part of our filter logic. You don't need to choose columns that you may want later; this is just the columns used in the script itself.

    Next we need to add our outputs. Select the Inputs and Outputs page. You get one by default, but we need to add some more; it wouldn't be much of a split otherwise. For this example we'll add just one more. Click the Add Output button, and you'll see a new output is added. Now we need to set some properties, so make sure our new Output 1 is selected. In the properties grid, change the SynchronousInputID property to be our input, Input 0, and change the ExclusionGroup property to 1. Now select Output 0 and change its ExclusionGroup property to 2. The value itself isn't important, provided each output has a different value other than zero. Setting this property on both outputs allows us to split the data down one or the other, making each exclusive. If we left it at 0, that output would get all the rows. It can be a useful feature, allowing you to copy selected rows to one output whilst retaining the full set of data in the other.

    Now we can go back to the Script page and start writing some code. For the example we will do a very simple test: if the value of the type column is U, for user table, then it goes down the first output, otherwise it ends up in the other. This mimics the exclusive behaviour of the Conditional Split transformation.

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Filter all user tables to the first output,
            // the remaining objects down the other
            if (Row.type.Trim() == "U")
            {
                Row.DirectRowToOutput0();
            }
            else
            {
                Row.DirectRowToOutput1();
            }
        }

    The code itself is very simple, a basic if clause that determines which of the DirectRowToOutput methods we call; there is one for each output. Of course you could write a lot more code to implement some very complex logic, but the final direction is still just a method call. If we now close the script component, we can hook up the outputs and test the package. Your numbers will vary depending on the sample database, but as you can see we have clearly split our input data into two outputs.

    As a final tip, when adding the outputs I would normally rename them, changing the Name in the Properties grid. This means the generated methods follow the pattern, as do the path labels shown on the design surface, making everything that much easier to recognise; see the sketch below.
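
    For instance (a sketch with hypothetical output names of your own choosing), if you renamed the two outputs UserTables and OtherObjects, the generated direct-row methods would pick up those names and the script reads much more naturally:

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Same logic as above, but against outputs renamed
            // "UserTables" and "OtherObjects" on the Inputs and Outputs page.
            if (Row.type.Trim() == "U")
            {
                Row.DirectRowToUserTables();
            }
            else
            {
                Row.DirectRowToOtherObjects();
            }
        }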

  • #altnetseattle – REST Services

    - by GeekAgilistMercenary
    Below are the notes I made in the REST Architecture session I helped kick off with Andrew.

    - RSS, ATOM, and such are needed for better discovery; i.e. there is still a need for some type of discovery.
    - Difficult: modeling behaviors in a RESTful way. ?? Invoking some type of state against an object. For instance, in the case of a POST vs. a GET: the GET is easy, comes back as is, but what about a POST, which often changes some state or something?
    - Challenge is doing multiple workflows with stateful workflows. How does batch work? Maybe model the batch as a resource.
    - Frameworks aren't particularly part of REST; REST is REST. But the point was argued that REST is modeled, or is part of modeling, a state machine of some sort… ?
    - Nothing is 100% reliable with REST – comparisons drawn with TCP/IP. Sufficient probability is made for the communications, but the idea of a possible failure has to be built into the usage model of REST.
    - Ruby on Rails / RESTfully, and others used. What were their issues, what do they do? ATOM feeds, objects serialized, using LINQ to XML with this. No state machine libraries.
    - Idempotent areas around REST and single-change POST changes are inherent in the architecture.
    - REST – one of the constrained languages is for the interaction with the system, limiting what can be done on the resources. Disagreement: there are no agreed-upon REST verbs.
    - Sam Ruby – RESTful services. Expanding the verbs within REST/HTTP pushes you off the web. Of the existing verbs, POST leaves the most up for debate.
    - Robert Reem used a Factory to deal with the POST to handle the new state, the POST identifying what it just did by the return.
    - Different states are put into POST, so that new prospective verbs can be used to advantage, without creating verbs for REST/HTTP and without breaking universal clients.
    - The biggest issue with REST services is their lack of state, yet it is also one of their biggest strengths. What happens is that the client takes up the often onerous task of handling all state, state machines, and other extraneous resource management. All the GETs, POSTs, DELETEs, INSERTs get pushed into abstraction. My 2 cents is that this in a way ends up pushing a huge proprietary burden onto the REST services, often removing the point of REST: to be simple and to the point.
    - WADL does provide discovery and some state control (sort of?). Statement made: "WADL" isn't needed; the JSON, XML, or other client-side returned data handles this.

    I then applied the law of 2 feet rule for myself and headed off to finish up these notes, post to the Wiki, and figure out what I was going to do next. For the original Wiki entry check it out here. I will be adding more to this post with a subsequent post. Please do feel free to post your thoughts and ideas about this, as I am sure everyone in the session will have more for elaboration.

  • Standards Corner: Preventing Pervasive Monitoring

    - by independentid
    Phil Hunt is an active member of multiple industry standards groups and committees and has spearheaded discussions, creation and ratification of industry standards including the Kantara Identity Governance Framework, among others. Being an active voice in the industry standards development world, we have invited him to share his discussions, thoughts, news & updates, and discuss use cases, implementation success stories (and even failures) around industry standards on this monthly column.

    Author: Phil Hunt

    On Wednesday night, I watched NBC's interview of Edward Snowden. The past year has been a tumultuous one in the IT security industry. There have been some amazing revelations about the activities of governments around the world, and we have had several instances of major security bugs in key security libraries: Apple's 'gotofail' bug, the OpenSSL Heartbleed bug, not to mention Java's zero-day bug, and others.

    Snowden's information showed the IT industry has been underestimating the need for security, and highlighted a general trend of lax use of TLS and poorly implemented security on the Internet. This did not go unnoticed in the standards community, and in particular the IETF.

    Last November, the IETF (Internet Engineering Task Force) met in Vancouver, Canada, where the issue of "Internet hardening" was discussed in a plenary session. Presentations were given by Bruce Schneier, Brian Carpenter, and Stephen Farrell describing the problem, the work done so far, and potential IETF activities to address the problem of pervasive monitoring. At the end of the presentation, the IETF called for consensus on the issue. If you know engineers, you know that it takes a while for a large group to arrive at a consensus, and this group numbered approximately 3000. When asked if the IETF should respond to pervasive surveillance attacks, there was an overwhelming response for 'Yes'. When it came to 'No', the room echoed in silence. This was just the first of several consensus questions that were each overwhelmingly in favour of a response. This is the equivalent of a unanimous opinion for the IETF.

    Since the meeting, the IETF has followed through with the recent publication of a new "best practices" document on Pervasive Monitoring (RFC 7258). This document is extremely sensitive in its approach and separates the politics of monitoring from the technical ones:

        Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise.

        The IETF community's technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community's consensus and establishes the technical nature of PM.

    The document goes on to further qualify what it means by "attack", clarifying that

        The term is used here to refer to behavior that subverts the intent of communicating parties without the agreement of those parties. An attack may change the content of the communication, record the content or external characteristics of the communication, or through correlation with other communication events, reveal information the parties did not intend to be revealed. It may also have other effects that similarly subvert the intent of a communicator.

    The past year has shown that Internet specification authors need to put more emphasis on information security and integrity. The year also showed that specifications alone are not good enough: the implementations of security and protocol specifications have to be of high quality, with superior testing. I'm proud to say Oracle has been a strong proponent of this, having already established its own secure coding practices.

  • Windows Azure Recipe: Social Web / Big Media

    - by Clint Edmonson
    With the rise of social media there's been an explosion of special interest media web sites on the web. From athletics to board games to funny animal behaviors, you can bet there's a group of people somewhere on the web talking about it. Social media sites allow us to interact, share experiences, and bond with like-minded enthusiasts around the globe. And through the power of software, we can follow trends in these unique domains in real time.

    Drivers

    - Reach
    - Scalability
    - Media hosting
    - Global distribution

    Solution

    Here's a sketch of how a social media application might be built out on Windows Azure:

    Ingredients

    - Traffic Manager (optional) – can be used to provide hosting and load balancing across different instances and/or data centers. Perfect if the solution needs to be delivered to different cultures or regions around the world.
    - Access Control – this service is essential to managing user identity. It's backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model has extensibility points to hook into other identity providers as well.
    - Web Role – hosts the core of the web application and presents a central social hub to users.
    - Database – used to store core operational, functional, and workflow data for the solution's web services.
    - Caching (optional) – as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    - Tables (optional) – for semi-structured data streams that don't need relational integrity, such as conversations, comments, or activity streams, tables provide a faster and more flexible way to store this kind of historical data (see the sketch at the end of this recipe).
    - Blobs (optional) – users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely.
    - Content Delivery Network (CDN) (optional) – for sites that serve users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.

    Training

    These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)

    Windows Azure (16 labs)
    Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.

    SQL Azure (7 labs)
    Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

    Windows Azure Services (9 labs)
    As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
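    To make the Tables ingredient concrete, here's a minimal sketch using the Azure storage client library. This is my own illustration rather than part of the recipe: the entity shape (ActivityEntry), table name, and connection string are all assumptions.

        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Table;

        // Hypothetical entity: one row per activity, partitioned by user so a
        // user's whole stream lives together and reads stay fast.
        public class ActivityEntry : TableEntity
        {
            public ActivityEntry() { }                  // required for deserialization
            public ActivityEntry(string userId, string id)
            {
                PartitionKey = userId;
                RowKey = id;                            // e.g. a reverse timestamp for newest-first reads
            }
            public string Verb { get; set; }
            public string Target { get; set; }
        }

        class TableExample
        {
            static void Main()
            {
                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");  // placeholder
                var table = account.CreateCloudTableClient().GetTableReference("activities");
                table.CreateIfNotExists();
                table.Execute(TableOperation.Insert(
                    new ActivityEntry("user42", "0001") { Verb = "commented", Target = "video/123" }));
            }
        }

    Because the partition key groups each user's activities, a single-partition query can page through one user's stream without touching the rest of the table.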

    Read the article

  • Sharing configuration settings between Windows Azure roles

    - by theo.spears
    If you are working on a medium-to-large Windows Azure project it's likely to involve more than one role, for example separate web and worker roles. Unfortunately, although all the Windows Azure configuration settings are stored in a single cscfg file, there is no way to share configuration settings between multiple roles. This means you have to duplicate common settings like connection strings across all your roles. There is an open Connect issue about this topic, but Microsoft have not said when they will fix it. In the meantime I've put together a dirty hack (ahem, cunning workaround) that creates a fake role containing your shared configuration settings and copies it to all roles as part of the build process. Here's how you set it up:

    1. Download the zip file attached to this post, and unzip it into the folder containing your Azure project (not your solution folder).

    2. Edit your csdef and cscfg files to include the placeholder project.

    ServiceDefinition.csdef:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WorkerRole name="GLOBAL">
            <ConfigurationSettings>
              <Setting name="ExampleSetting" />
            </ConfigurationSettings>
          </WorkerRole>
          <WorkerRole name="MyWorker">
            <ConfigurationSettings>
            </ConfigurationSettings>
          </WorkerRole>
          <WebRole name="MyWeb">
            <Sites>
              <Site name="Web">
                <Bindings>
                  <Binding name="WebEndpoint" endpointName="WebEndpoint" />
                </Bindings>
              </Site>
            </Sites>
            <ConfigurationSettings>
            </ConfigurationSettings>
          </WebRole>
        </ServiceDefinition>

    ServiceConfiguration.cscfg:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
          <Role name="GLOBAL">
            <ConfigurationSettings>
              <Setting name="ExampleSetting" value="Hello World" />
            </ConfigurationSettings>
            <Instances count="1" />
          </Role>
          <Role name="MyWorker">
            <Instances count="1" />
            <ConfigurationSettings>
            </ConfigurationSettings>
          </Role>
          <Role name="MyWeb">
            <Instances count="1" />
            <ConfigurationSettings>
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    It is important that all your roles contain a ConfigurationSettings entry in both cscfg and csdef files, even if it's empty; otherwise the shared configuration settings will not be inserted.

    3. Open your Azure deployment (.ccproj) project in Notepad, and add the line below that imports globalsettings.targets:

        ...
        <Import Project="$(CloudExtensionsDir)Microsoft.CloudService.targets" />
        <Import Project="globalsettings/globalsettings.targets" />
        </Project>

    It is important that you add this below the Microsoft.CloudService.targets import line, as it replaces some of the rules defined in that file. Visual Studio will prompt you to reload the project; say yes.

    At this point you will have a new Azure role called 'GLOBAL' with settings you can edit through the Visual Studio properties panel as normal. This role will never be deployed, but any settings you add to it will be copied to all your other roles when deployed or tested locally within Visual Studio.
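    Once the build step has copied the GLOBAL role's settings into the real roles, reading a shared value is just the normal configuration call. A minimal sketch, assuming the ExampleSetting defined above:

        using Microsoft.WindowsAzure.ServiceRuntime;

        // Works identically in the web and worker roles, because the build step
        // has copied the GLOBAL role's settings into each of them.
        string example = RoleEnvironment.GetConfigurationSettingValue("ExampleSetting");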

    Read the article

  • AWS .NET SDK v2: setting up queues and topics

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/13/aws-.net-sdk-v2-setting-up-queues-and-topics.aspx

    Following on from my last post, reading from SQS queues with the new SDK is easy stuff, but linking a Simple Notification Service topic to an SQS queue is a bit more involved. The AWS model for topics and subscriptions is a bit more advanced than in Azure Service Bus. SNS lets you have subscribers on multiple different channels, so you can send a message which gets relayed to email addresses, mobile apps and SQS queues all in one go. When you request a subscription on any channel, the owner of that endpoint needs to confirm they're happy for you to send them messages. With email subscriptions, the user gets a confirmation request from Amazon which they need to reply to before they start getting messages. With SQS, you need to grant the topic permission to write to the queue. If you own both the topic and the queue, you can do it all in code with the .NET SDK.

    Let's say you want to create a new topic, a new queue as a topic subscriber, and link the two together. Creating the topic is easy with the SNS client (which has an expanded name, AmazonSimpleNotificationServiceClient, compared to the SQS client, which is just called AmazonSQSClient):

        var request = new CreateTopicRequest();
        request.Name = TopicName;
        var response = _snsClient.CreateTopic(request);
        TopicArn = response.TopicArn;

    In the response from AWS (which I'm assuming is successful), you get an ARN (Amazon Resource Name), which is the unique identifier for the topic. We create the queue using the same code from my last post, AWS .NET SDK v2: the message-pump pattern, and then we need to subscribe the queue to the topic. The topic creates the subscription request:

        var response = _snsClient.Subscribe(new SubscribeRequest
        {
            TopicArn = TopicArn,
            Protocol = "sqs",
            Endpoint = _queueClient.QueueArn
        });

    That response will give you an ARN for the subscription, which you'll need if you want to set attributes like RawMessageDelivery. Then the SQS client needs to confirm the subscription by allowing the topic to send messages to it. The SDK doesn't give you a nice mechanism for doing that, so I've extended my AWS wrapper with a method that encapsulates it:

        internal void AllowSnsToSendMessages(TopicClient topicClient)
        {
            var policy = Policies.AllowSendFormat
                .Replace("%QueueArn%", QueueArn)
                .Replace("%TopicArn%", topicClient.TopicArn);
            var request = new SetQueueAttributesRequest();
            request.Attributes.Add("Policy", policy);
            request.QueueUrl = QueueUrl;
            var response = _sqsClient.SetQueueAttributes(request);
        }

    That builds up a policy statement, which gets added to the queue as an attribute, and specifies that the topic is allowed to send messages to the queue. The statement itself is a JSON block which contains the ARN of the queue, the ARN of the topic, and an Allow effect for the sqs:SendMessage action:

        public const string AllowSendFormat = @"{
          ""Statement"": [
            {
              ""Sid"": ""MySQSPolicy001"",
              ""Effect"": ""Allow"",
              ""Principal"": { ""AWS"": ""*"" },
              ""Action"": ""sqs:SendMessage"",
              ""Resource"": ""%QueueArn%"",
              ""Condition"": {
                ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" }
              }
            }
          ]
        }";

    There's a new gist with an updated QueueClient and a new TopicClient here: Wrappers for the SQS and SNS clients in the AWS SDK for .NET v2. Both clients have an Ensure() method which creates the resource, so if you want to create a topic and a subscription you can use:

        var topicClient = new TopicClient("BigNews", "ImListening");

    And the topic client has a Subscribe() method, which calls into the message pump on the queue client:

        topicClient.Subscribe(x => Log.Debug(x.Body));
        var message = ...; // build up your message, then:
        topicClient.Publish(message);

    So you can isolate all the fiddly bits and use SQS and SNS with a similar interface to the Azure SDK.
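    As a usage note, here's a minimal sketch (mine, not from the post) of draining the subscribed queue with the raw v2 SQS client rather than the wrapper; the queueUrl parameter is assumed to hold the subscriber queue's URL, and the client is assumed to be constructed elsewhere with credentials and region from app config:

        using System;
        using Amazon.SQS;
        using Amazon.SQS.Model;

        static class QueueDrainExample
        {
            public static void Drain(IAmazonSQS sqs, string queueUrl)
            {
                var received = sqs.ReceiveMessage(new ReceiveMessageRequest
                {
                    QueueUrl = queueUrl,
                    MaxNumberOfMessages = 10,
                    WaitTimeSeconds = 10    // long polling to avoid empty-receive spin
                });
                foreach (var message in received.Messages)
                {
                    // Unless RawMessageDelivery is set on the subscription, the body
                    // is an SNS JSON envelope wrapping the published payload.
                    Console.WriteLine(message.Body);
                    sqs.DeleteMessage(new DeleteMessageRequest
                    {
                        QueueUrl = queueUrl,
                        ReceiptHandle = message.ReceiptHandle
                    });
                }
            }
        }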

    Read the article

  • Nest reinvents smoke detectors. Introduces smart and talking smoke detector that keeps quiet when you wave

    - by Gopinath
    Nest, the leading smart thermostat maker, has introduced a new smart home device today: Nest Protect, a smart, talking smoke and carbon monoxide detector that you can quiet with a wave of your hand.

    Fewer annoyances and more intelligence

    Smoke detectors have been around for a long time and play a major role in protecting homes from fire. But the technology of these devices is stale, with no major innovation for the past several years. With the introduction of Nest Protect, the landscape of smoke detectors is set to change, just as the Nest thermostat redefined its industry two years ago. Nest Protect is internet enabled and equipped with motion- and smoke-detection sensors, so when it starts beeping you can silence it by waving your hand instead of performing circus feats to turn off the alarm. Anyone who cooks in a home equipped with a smoke detector knows how annoying it is to quiet a sensitive detector that goes off far too often.

    Apart from addressing the annoyances of regular smoke detectors, Nest Protect can talk. It alerts users with clear, actionable instructions when it detects danger. Instead of harsh beeps, it actually speaks to you so you know what is happening: it tells you what kind of smoke it has detected and in which room. Multiple Nest Protects installed in a home can communicate with each other. Say there is smoke in the bedroom: the Nest Protect installed there shares this information with all the other Nest Protects in the home, and the device in your kitchen can alert you that there is smoke in the bedroom.

    There is an app for that

    The internet-enabled Nest Protect has an app to view its status and various alerts. When the Protect is running on a low battery, it alerts you to replace it soon. If there is smoke at home and you are away, you will get message alerts. The app works on all major smartphones as well as tablets.

    Auto shutdown of gas furnaces/heaters on smoke

    Apart from forming a network with other Nest Protect devices installed at home, Nest Protect can also communicate with a Nest Thermostat if one is installed. When carbon monoxide is detected, it can shut off your gas furnace automatically. And with the help of its motion detectors, it improves the Nest Thermostat's auto-away functionality.

    It looks elegant and costs a lot more than a regular smoke detector

    Just like the Nest Thermostat, Nest Protect is elegant and adorable; you fall in love with it the moment you see it. It's another masterpiece from the designer of Apple's iPod. All is good with the Nest Protect except the price. It costs a whopping $129, almost four times more than the best-selling conventional smoke detectors available at around $30. A single-bedroom apartment would require at least three detectors, which comes to around $390 for Nest Protects compared to the roughly $90 required for conventional smoke detectors. Though the Nest Thermostat is also expensive compared to conventional thermostats, it offered real savings through its intelligent auto-away feature, so users were able to see a return on their investment. If Nest Protect can also provide a good return on investment, it will be very successful.

    Read the article

  • Thoughts on my new template language/HTML generator?

    - by Ralph
    I guess I should have prefaced this with: Yes, I know there is no need for a new templating language, but I want to make a new one anyway, because I'm a fool. That aside, how can I improve my language? Let's start with an example:

        using "html5"
        using "extratags"

        html {
            head {
                title "Ordering Notice"
                jsinclude "jquery.js"
            }
            body {
                h1 "Ordering Notice"
                p "Dear @name,"
                p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}."
                p "Here are the items you've ordered:"
                table {
                    tr {
                        th "name"
                        th "price"
                    }
                    for(@item in @item_list) {
                        tr {
                            td @item.name
                            td @item.price
                        }
                    }
                }
                if(@ordered_warranty)
                    p "Your warranty information will be included in the packaging."
                p(class="footer") {
                    "Sincerely,"
                    br
                    @company
                }
            }
        }

    The "using" keyword indicates which tags to use. "html5" might include all the standard HTML5 tags, but your tag names wouldn't have to be based on their HTML counterparts at all if you didn't want them to be. The "extratags" library, for example, might add an extra tag called "jsinclude" which gets replaced with something like:

        <script type="text/javascript" src="@content"></script>

    Tags can optionally be followed by an opening brace. They will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element.

    Variables are prefixed with the @ symbol. They may be used inside double-quoted strings. I think I'll use single quotes to indicate "no variable substitution", like PHP does.

    Filter functions can be applied to variables like @variable|filter. Arguments can be passed to the filter: @variable|filter:@arg1,arg2="y".

    Attributes can be passed to tags by including them in parentheses, like p(class="classname").

    You will also be able to include partial templates, like:

        for(@item in @item_list)
            include("item_partial", item=@item)

    Something like that, I'm thinking. The first argument will be the name of the template file, and subsequent ones will be named arguments, where @item gets the variable name "item" inside that template. I also want to have a collection version like Ruby on Rails has, so you don't even have to write the loop.

    Thoughts on this and exact syntax would be helpful :) Some questions:

    - Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else?
    - Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables.
    - Tags and controls (like if, for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? This would make it clearer that the "tag" isn't behaving like a normal tag that will get replaced with content, but controls the flow. Also, it would allow name reuse.
    - Do you like the attribute syntax? (round brackets)
    - How should I do template inheritance/layouts? In Django, the first line of the file has to include the layout file, and then you delimit blocks of code which get stuffed into that layout. In CakePHP, it's kind of backwards: you specify the layout in the controller's view function, the layout gets a special $content_for_layout variable, and the entire template gets stuffed into that, so you don't need to delimit any blocks of code. I guess Django's is a little more powerful because you can have multiple code blocks, but it makes your templates more verbose... I'm trying to decide which approach to take.
    - Filtered variables inside quotes: "xxx {@var|filter} yyy", "xxx @{var|filter} yyy", or "xxx @var|filter yyy" (i.e., @ inside the braces, @ outside the braces, or no braces at all). I think no braces might cause problems, especially when you try adding arguments, like @var|filter:arg="x"; then the quotes would get confused. But perhaps a braceless version could work when there are no quotes...? Still, which option for braces, first or second? I think the first might be better, because then we're consistent: the @ is always nudged up against the variable.

    I'll add more questions in a few minutes, once I get some feedback.

    Read the article

  • Windows Azure Recipe: Consumer Portal

    - by Clint Edmonson
    Nearly every company on the internet has a web presence. Many merely use theirs for informational purposes. More sophisticated portals allow customers to register their contact information and provide some level of interaction or customer support. But as our understanding of how consumers use the web increases, the more progressive companies are taking advantage of the social web and rich media delivery to connect at a deeper level with the consumers of their goods and services.

    Drivers

    - Cost reduction
    - Scalability
    - Global distribution
    - Time to market

    Solution

    Here's a sketch of how a Windows Azure consumer portal might be built out:

    Ingredients

    - Web Role – this will host the core of the solution. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally PHP or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads.
    - Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in.
    - Access Control (optional) – if identity needs to be tracked within the solution, the access control service combined with the Windows Identity Foundation framework provides out-of-the-box support for several social media platforms including Windows LiveID, Google, Yahoo!, and Facebook. It also has a provider model to allow integration with other platforms as well.
    - Caching (optional) – for sites with high traffic and lots of read-only data and lists, the distributed in-memory caching service can be used to cache and serve up static data at higher scale and speed than direct database requests. It can also be used to manage user session state.
    - Blob Storage (optional) – for sites that serve up unstructured data such as documents, video, audio, device drivers, and more. The data is highly available and stored redundantly across data centers. Each entry in blob storage is provided with its own unique URL for direct access by the browser (see the sketch at the end of this recipe).
    - Content Delivery Network (CDN) (optional) – for sites that serve users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.

    Training Labs

    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)

    Windows Azure (16 labs)
    Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.

    SQL Azure (7 labs)
    Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

    Windows Azure Services (9 labs)
    As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
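    As an illustration of the Blob Storage ingredient, here's a minimal sketch of uploading a document and handing its unique URL to the browser. It's my own illustration rather than part of the recipe; the container name, file paths, and connection string are assumptions.

        using System;
        using System.IO;
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Blob;

        class BlobExample
        {
            static void Main()
            {
                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");  // placeholder
                var container = account.CreateCloudBlobClient().GetContainerReference("downloads");
                container.CreateIfNotExists();

                var blob = container.GetBlockBlobReference("drivers/setup.exe");
                using (var file = File.OpenRead(@"C:\uploads\setup.exe"))
                {
                    blob.UploadFromStream(file);
                }

                // Every blob gets its own URL for direct browser access (assuming
                // the container's access policy permits public reads).
                Console.WriteLine(blob.Uri);
            }
        }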

    Read the article

  • Is there a good [and modern] reason to not have static HTML pages with AJAX content, rather than generate pages?

    - by user1725
    Assumptions: We don't care about IE6 or NoScript users.

    Let's pretend we have the following design concept: all your pages are HTML/CSS that create the aesthetics, layout, colours, and general design-related things. Let's pretend this basic code below is that:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html>
        <head>
            <link href="/example.css" rel="stylesheet" type="text/css"/>
            <script src="example.js" type="text/javascript"></script>
        </head>
        <body>
            <div class="left"></div>
            <div class="mid"></div>
            <div class="right"></div>
        </body>
        </html>

    Which, with the right CSS, should produce three vertical columns on the web page. Now, here's the root of the question: what are the serious advantages and/or disadvantages of loading the content of these columns (let's assume they are all indeed dynamic content, not static) via AJAX requests, versus having the content pre-rendered by a scripting language? So for instance, in the AJAX example we would have, assuming jQuery runs this on load:

        // Multiple HTTP requests
        $("body > div.left").load("./script.php?content=news");
        $("body > div.right").load("./script.php?content=blogs");
        $("body > div.mid").load("./script.php?content=links");

    or:

        // Single HTTP request
        $.ajax({
            url: './script.php?content=news|blogs|links',
            type: 'GET',
            dataType: 'json',
            success: function (data) {
                $("body > div.left").html(data.news);
                $("body > div.right").html(data.blogs);
                $("body > div.mid").html(data.links);
            }
        });

    versus doing this:

        <body>
            <div class="left">
                <?php echo function_returning_news(); ?>
            </div>
            <div class="mid">
                <?php echo function_returning_blogs(); ?>
            </div>
            <div class="right">
                <?php echo function_returning_links(); ?>
            </div>
        </body>

    I'm personally thinking right now that static HTML pages are the better method. My reasoning:

    - I've separated my data, logic, and presentation (i.e., "MVC") code. I can make changes to one without touching the others.
    - Browser caches mean the server load is mostly for the content, not the presentation wrapped around it.
    - I could turn my "script.php" into a more robust API for the website.

    But I'm not certain these are legitimately good reasons, and I'm not confidently aware of other issues that could arise, so I would like to know the pros and cons, so to speak.

    Read the article

  • forEach and Facelets - a bugfarm just waiting for harvest

    - by Duncan Mills
    An issue that I've encountered before and saw again today seems worthy of a little write-up. It's all to do with a subtle yet highly important difference in behaviour between JSF 2 running with JSP and running on Facelets (.jsf pages). The incident I saw today can be seen in a report on the ADF EMG bugzilla (Issue 53) and in a blog posting by Ulrich Gerkmann-Bartels, who reported the issue to the EMG. Ulrich's issue nicely shows how tricky this particular gotcha can be. On the surface, the problem is squarely the fault of MDS, but underneath, MDS is in fact innocent.

    To summarize the problem in a simpler testcase than Ulrich's example, here's a simple fragment of code:

        <af:forEach var="item" items="#{itemList.items}">
          <af:commandLink id="cl1" text="#{item.label}" action="#{item.doAction}" partialSubmit="true"/>
        </af:forEach>

    Looks innocent enough, right? We see a bunch of links printed out; great. The issue here, though, is the id attribute. Logically you can kind of see the problem: the forEach loop is creating (presumably) multiple instances of the commandLink, but only one id is specified, cl1. We know that IDs have to be unique within a JSF component tree, so that must be a bad thing?

    The problem is that JSF under JSP implements some hacks when the component tree is generated to transparently fix this problem for you. Behind the scenes it ensures that each instance really does have a unique id. Really nice of it to do so, thank you very much. However (you could see this coming), the same is not true when running with Facelets (this is under 11.1.2.n). In that case, what you put for the id is what you get, and JSF does not mess around in the background for you. So you end up with a component tree that contains duplicate ids which are only created at runtime, and subtle chaos can ensue. The symptoms are wide and varied, from something pretty obscure such as the combination Ulrich uncovered, to something as frustrating as your ActionListener just not being triggered. And yes, I've wasted hours on just such an issue.

    The Solution

    Once you're aware of this one it's really simple to fix; there are two options:

    1. Remove the id attribute altogether on components that will cause some kind of submission within the forEach loop, and let JSF do the right thing in generating them. Then you'll be assured of uniqueness.
    2. Use the var attribute of the loop to generate a unique id for each child instance, for example in the above case: <af:commandLink id="cl1_#{item.index}" ... />.

    So, one to watch out for in your upgrades to JSF 2 and one, perhaps, for your coding standards today to prepare you for it. For completeness, here's the reference to the underlying JSF issue that's at the heart of this: JAVASERVERFACES-1527

    Read the article

  • Thoughts on C# Extension Methods

    - by Damon
    I'm not a huge fan of extension methods. When they first came out, I remember seeing a method on an object that was fairly useful, but when I went to use it in another piece of code, that method wasn't available. It turned out it was an extension method, and I hadn't included the appropriate assembly and imports statement in my code to use it. I remember being a bit confused at first about how the heck that could happen (hey, extension methods were new, cut me some slack), and it took a bit of time to track down exactly what I needed to include to get that method back. I imagined a new developer trying to figure out why a method was missing, fruitlessly searching MSDN for a method that didn't exist, and it just didn't sit well with me.

    I am of the opinion that if you have an object, then you shouldn't have to include additional assemblies to get additional instance-level methods out of that object. That opinion applies to namespaces as well: I do not like it when the contents of a namespace are split out into multiple assemblies. I prefer to have static utility classes instead of extension methods to keep things nicely packaged into a cohesive unit. It also makes it abundantly clear where utility methods are used in code. I will concede, however, that it can make code a bit more verbose and lengthy. There is always a trade-off.

    Some people harp on extension methods because they break the tenets of object-oriented development and allow you to add methods to sealed classes. Whatever. Extension methods are just utility methods that you can tack onto an object after the fact. Extension methods do not give you any more access to an object than the developer of that object allows, so I say that those who cry OO foul on extension methods really don't have much of an argument on which to stand. In fact, I have to concede that my dislike of them is really more about style than anything of great substance.

    One interesting thing that I found regarding extension methods is that you can call them on null objects. Take a look at this extension method:

        namespace ExtensionMethods
        {
            public static class StringUtility
            {
                public static int WordCount(this string str)
                {
                    if (str == null) return 0;
                    return str.Split(new char[] { ' ', '.', '?' },
                        StringSplitOptions.RemoveEmptyEntries).Length;
                }
            }
        }

    Notice that the extension method checks whether the incoming string parameter is null. I was worried that the runtime would check the object instance for null before calling an extension method, but that is apparently not the case. So the following code runs just fine:

        string s = null;
        int words = s.WordCount();

    I am a big fan of things working, but this seems to go against everything I've come to know about instance-level methods. However, an extension method is really a static method masquerading as an instance-level method, so I suppose it would be far more frustrating if it failed, since there is really no reason it shouldn't succeed.

    Although I'm not a fan of extension methods, I will say that if you ever find yourself at an impasse with a die-hard fan of either the utility-class or extension-method approach, there is a common ground: extension methods are defined in static classes, and you can call them through those static classes as well as directly on the objects they extend. So if you build your utility classes using extension methods, then you can have it your way and they can have it theirs.
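    To make that common ground concrete, here's a minimal sketch using the WordCount method above; both call styles resolve to the same static method, so a team can mix them freely:

        using System;
        using ExtensionMethods; // brings the extension method into scope

        class WordCountDemo
        {
            static void Main()
            {
                string s = "The quick brown fox.";

                // Extension-method style: reads like an instance method.
                int viaExtension = s.WordCount();

                // Utility-class style: the very same method, called explicitly.
                int viaStatic = StringUtility.WordCount(s);

                Console.WriteLine(viaExtension == viaStatic); // True
            }
        }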

    Read the article

  • Workflow versioning

    - by Nitra
    I believe I have a fundamental misunderstanding when it comes to workflow engines, which I would appreciate your help in sorting out. I'm not sure if my misunderstanding is specific to the workflow engine I'm using or if it's a general one. I happen to use Windows Workflow Foundation (WWF).

    TL;DR version

    WWF allows you to implement business processes in long-running workflows (think months or even years). Once started, the workflows can't be changed. But what business process can't change at any time? And if a business process changes, wouldn't you want your software to reflect this change for already-started 'instances' of the business process? What am I missing?

    Background

    In WWF you define a workflow by combining a set of activities. There are different types of activities: some are for flow control, such as the IfElseActivity and the WhileActivity, while others let you perform actual tasks, such as the CodeActivity, which allows you to run .NET code, and the InvokeWebServiceActivity, which allows you to call web services. The activities are combined into a workflow using a visual designer: you pretty much drag and drop activities from a toolbox to a designer area and connect the activities to each other. The workflow and activities have input parameters, output parameters, and variables. (A code sketch follows at the end of this question.)

    We have a single workflow which sometimes runs in a matter of a few days but may run for 5-6 months. WWF takes care of persisting the workflow state (which activity we are currently executing, what the variable values are, and so on). So far, I think WWF makes sense. Some people will prefer implementing a software representation of a business process using a visual designer over writing all of it in code.

    So what's the issue then?

    What I don't really get is the following: WWF is designed to take care of long-running workflows, but at the same time it has no built-in functionality that allows you to modify running workflows. So if you model a business process using a workflow and run it for 6 months, you had better hope the business process does not change. Because if it does, you'll have to have multiple versions of the workflow executing at the same time. This seems like a fundamental design mistake to me, but at the same time it seems more likely that I've misunderstood something.

    For us, this has had some real-world effects:

    - We release new versions every month, but some workflows may run for a year. This means that we have several versions of the workflow running in parallel, in other words several versions of the business logic. This is the same as having many different versions of your code running in production in the same system at the same time, which becomes a bit hard for users to understand. (Depending on whether they clicked a 'Start' button 9 or 10 months ago, the software will behave differently.)
    - Our workflow refers to different types of entities, and since WWF has now persisted and serialized these, we can't really refactor the entities, because then existing workflows can't be resumed (deserialization will fail).

    We've received some suggestions on how to handle this:

    - When we create a new version of the workflow, cancel all running workflows and create new ones. But our workflows involve a lot of manual work, and if we start from scratch a lot of people have to redo their work.
    - Track what has been done in the workflow, and when you create a new one, skip activities which have already been executed. I feel that this alternative may work for simple workflows, but it becomes hairy to automatically figure out which activities to skip if major refactoring has been done to a workflow.
    - When we create a new version of the workflow, upgrade old versions using the new WWF 4.5 functionality for upgrading workflows. But then we would have to skip using the visual designer and write code to inject activities in the right places in the workflow. According to MSDN, this upgrade functionality is only intended for minor bug fixes, not larger changes.

    What am I missing?
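    For readers who haven't seen these activity compositions outside the visual designer, here's a minimal WF4-style sketch of composing and invoking a workflow in code. It's my own illustration, not from the question (whose activity names like IfElseActivity come from the older WF 3.x model), and the branch condition is made up:

        using System;
        using System.Activities;
        using System.Activities.Statements;

        class WorkflowSketch
        {
            static void Main()
            {
                // A trivial workflow: a sequence with a conditional branch.
                Activity workflow = new Sequence
                {
                    Activities =
                    {
                        new WriteLine { Text = "Order received" },
                        new If
                        {
                            Condition = new InArgument<bool>(ctx => DateTime.Now.Hour < 12),
                            Then = new WriteLine { Text = "Process in the morning batch" },
                            Else = new WriteLine { Text = "Process in the afternoon batch" }
                        }
                    }
                };

                // Runs to completion on the current thread. Long-running, persistable
                // workflows (the kind discussed above) would use WorkflowApplication
                // with a persistence store instead.
                WorkflowInvoker.Invoke(workflow);
            }
        }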

    Read the article
