Search Results


  • SOA Community Newsletter October 2013

    - by JuergenKress
    Dear SOA & BPM Partner Community member, our October newsletter edition focuses on Oracle OpenWorld 2013: highlights, keynotes, and all presentations. Thanks to all partners who made the conference a huge success. If you could not come to San Francisco, you will find all the details within this newsletter. Because this edition contains a lot of content, we have three sections: SOA; BPM & ACM; and AppAdvantage & UX. Make sure you share your content with the community, ideally via Twitter @soacommunity #soacommunity!

    What is new in SOA Suite 12c? At OOW the product management team demonstrated some of the key features of the upcoming version. The important SOA topics are mobile integration and cloud integration – make sure you re-use your existing SOA platform! Bruce Tierney showcased the Agilent mobile integration, and you can try the new Mobile Order Management for EBS GSE Demo using middleware technology. On cloud integration, the product management team presented several OOW sessions and published two whitepapers. As SOA matures, awareness of SOA Governance continues to rise; see Introducing Oracle Enterprise Repository Express Workflows and watch Luis Weir: Challenges to Implementing SOA Governance. Thanks to Ronald for the SOA Made Simple | Introduction to SOA series; the next article in the Industrial SOA series is SOA and User Interfaces (UI).

    Have you achieved a successful BPM implementation? Nominate your customer references for the Gartner Business Process Management Excellence Awards 2014. Do you want to showcase the latest BPM Suite? Make sure you use the hosted BPM PS6 (11.1.1.7) demo. Do you want to become an expert in BPM Suite? Attend one of our BPM Bootcamps in Germany, the Netherlands, Spain, or the UK! If you cannot make it, we offer plenty of on-demand content: Advanced BPM Scenarios, BPM Architecture Topics, Process Modeling and Life Cycle, Adaptive Case Management, and Smart Application Extensibility with Oracle Process Accelerators. I would also recommend the on-demand webcast with Bruce Silver & Ajay Khanna, a great introduction to Adaptive Case Management. Thanks to Mark Foster from the A-Team for the ACM article series, and to Leon Smiers for his blog posts.

    If you accomplished a SOA Suite or BPM Suite project and want to become a certified SOA or BPM expert, we are again offering free vouchers to become a certified SOA & BPM expert (limited to partners in Europe, the Middle East, and Africa). Don't miss this opportunity and become Specialized! Best regards, Jürgen Kress

    To read the newsletter please visit http://tinyurl.com/soaNewsOctober2013 (OPN account required). To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
    Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx

    In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message "[Partition] has exceeded the max bytes". Below, I provide some additional info on this particular issue and help identify some options if it occurs. As an aside, this post only applies if you are missing portions of Usage data – think blind spots on intermittent days, or user activity regularly sparse for the afternoon/evening. If this fits your scenario, read on. But if Usage logs are outright missing, go check out my Troubleshooting post first.

    Background on the problem: The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed-size logical partitions, and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDB would be split into 7 corresponding partitions at twice the size. In other words, the partition size is generally defined as [max size for DB] / [number of retention days]. Going back to the default scenario, the "max size" for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition.

    From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away, because SharePoint won't allow any more to be written to that day's partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following in the ULS:

        04/08/2012 09:30:04.78  OWSTIMER.EXE (0x1E24)  0x2C98  SharePoint Foundation  Health  i0m6  High  Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368

    It's also worth noting that the exact bytes reported (e.g. '444858368' above) may vary slightly among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter; this error message intends to indicate that the reported usage has exceeded the partition size for the given day.

    What it means: The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise to periodically check the ULS logs for this message. Down the road, I plan to explore whether [Developing a Custom Health Rule] could be leveraged to identify the issue (if you've ever built Custom Health Rules, I'd be interested to hear about your experiences).
    Overcoming this issue also poses a challenge, with workaround options including:

    Lower the retention. Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data – the lower the retention, the lower the divisor and thus the bigger the partition. For example, halving the retention from 14 to 7 days would halve the number of partitions but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions… and so on.

    Recreate the LoggingDB with an increased size. The property MaxTotalSizeInBytes is exposed through the object model for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact, because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data, because the Usage Service Application can only have a single LoggingDB.

    Here is an example snippet to update the "Page Requests" Usage Definition:

        $def = Get-SPUsageDefinition -Identity "page requests"
        $def.MaxTotalSizeInBytes = 12400000000
        $def.Update()

    Create a new logging database and attach it to the Usage Service Application using the following command:

        Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>

    Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken by running the following SQL query (be sure to replace the database name in the query with the name provided in the PowerShell above):

        SELECT * FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (NOLOCK)
        WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'
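    For reference, here is the second workaround consolidated into one runnable sequence. This is a sketch only: the database server and name are placeholders for your environment, and attaching a fresh LoggingDB discards all prior Usage data.

        # Sketch: raise MaxTotalSizeInBytes, then attach a fresh LoggingDB.
        # Run from the SharePoint 2010 Management Shell. Placeholders: SQL01,
        # WSS_UsageApplication_New.
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

        # Double the allowed total size (~12.4GB split across the daily partitions)
        $def = Get-SPUsageDefinition -Identity "page requests"
        $def.MaxTotalSizeInBytes = 12400000000
        $def.Update()

        # The new size only applies to a newly created LoggingDB, so attach a
        # fresh database (this throws away all prior Usage data!)
        Get-SPUsageApplication |
            Set-SPUsageApplication -DatabaseServer "SQL01" -DatabaseName "WSS_UsageApplication_New"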

    Read the article

  • #altnetseattle – REST Services

    - by GeekAgilistMercenary
    Below are the notes I made in the REST Architecture Session I helped kick off with Andrew.

    - RSS, ATOM, and such are needed for better discovery; i.e. there still is a need for some type of discovery.
    - Modeling behaviors in a RESTful way is difficult: invoking some type of state change against an object. For instance, in the case of a POST vs. a GET, the GET is easy, it comes back as is, but what about a POST, which often changes some state?
    - The challenge is doing multiple workflows with stateful workflows. How does batch work? Maybe model the batch as a resource (a sketch of this idea follows at the end of these notes).
    - Frameworks aren't particularly part of REST, REST is REST. But the point was argued that REST is modeled, or is part of modeling, a state machine of some sort… ?
    - Nothing is 100% reliable w/ REST – comparisons drawn with TCP/IP. Sufficient probability is achieved for the communications, but the idea of a possible failure has to be built into the usage model of REST.
    - Ruby on Rails / RESTfully, and others used. What were their issues, what do they do? ATOM feeds, objects serialized, using LINQ to XML w/ this. No state machine libraries.
    - Idempotency around REST, and single-change POSTs, are inherent in the architecture.
    - REST – one of the constraints is the language for interaction w/ the system, limiting what can be done on the resources. Disagreement: there is no agreed-upon set of REST verbs.
    - Sam Ruby – RESTful services: expanding the verbs beyond REST/HTTP pushes you off the web. Of the existing verbs, POST leaves the most up for debate.
    - Robert Reem used a Factory to deal with the POST to handle the new state, the POST identifying what it just did by its return. Different states are put into POST, so that new prospective verbs can be used to advantage, without creating new verbs for REST/HTTP and without breaking universal clients.
    - The biggest issue with REST services is their lack of state, yet it is also one of their biggest strengths. What happens is that the client takes up the often onerous task of handling all state, state machines, and other extraneous resource management. All the GETs, POSTs, DELETEs, INSERTs get pushed into abstraction. My 2 cents is that this in a way ends up pushing a huge proprietary burden onto the REST services, often removing the point of REST: to be simple and to the point.
    - WADL does provide discovery and some state control (sort of?). Statement made: "WADL" isn't needed; the JSON, XML, or other client-side returned data handles this.

    I then applied the law of 2 feet for myself and headed off to finish up these notes, post to the Wiki, and figure out what I was going to do next. For the original Wiki entry, check it out here. I will be adding more to this post with a subsequent post. Please do feel free to post your thoughts and ideas about this, as I am sure everyone in the session will have more to elaborate.
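    A quick sketch of the "model the batch as a resource" idea mentioned above; the endpoint names and payload are invented for illustration:

        POST /batches HTTP/1.1              (submit the batch once; not idempotent)
        Content-Type: application/json

        { "operations": [ { "op": "delete", "order": 17 },
                          { "op": "delete", "order": 18 } ] }

        HTTP/1.1 201 Created                (the batch is now itself a resource)
        Location: /batches/42

        GET /batches/42 HTTP/1.1            (safe, repeatable status polling)
        DELETE /batches/42 HTTP/1.1         (idempotent cancel/cleanup)

    This keeps the unsafe, non-idempotent step confined to a single POST, while everything stateful about the workflow becomes addressable with the ordinary safe and idempotent verbs.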

    Read the article

  • BizTalk Testing Series - The xpath Function

    - by Michael Stephenson
    Background: While the xpath function in a BizTalk orchestration is a very powerful feature, I have often come across the situation where someone has hard-coded an xpath expression in an orchestration. If you have read some of my previous posts about testing, I've tried to get across a general theme, like test-driven or test-assisted development approaches, where the underlying principle is that you're building up your solution from small, well-tested units that are put together; the resulting solution is usually quite robust. You will be finding more bugs within your unit tests and fewer outside of your team.

    The thing I don't like about the xpath function's usual usage is when you come across an orchestration which has something like the below snippet in an expression or assign shape:

        string result = xpath(myMessage, "string(//Order/OrderItem/ProductName)");

    My main issue with this is that the xpath statement is hard-coded in the orchestration, and you don't really know it works until you are running the orchestration. Some of the problems you end up with are:

    - You waste time on lengthy debugging of the orchestration when your statement isn't working
    - You might not know the function isn't working quite as expected, because the testable unit around it is big
    - You are much more open to regression issues if your schema changes

    Approach to Testing: The technique I usually follow is to hold the xpath statement as a constant in a helper class, or to format a constant with a helper function to get the actual xpath statement. It is then used by the orchestration as follows:

        string result = xpath(myMessage, MyHelperClass.ProductNameXPathStatement);

    Because the xpath statement is available outside of the orchestration, it now becomes testable in its own right. This means:

    - I can test it in its own right
    - I'm less likely to waste time tracking down problems caused by an error in the statement
    - I can reduce the risk of regression issues

    I'm now able to implement some testing around my xpath statements, which usually goes something like the following:

    - The test will use a sample XML file
    - The sample will be validated against the schema
    - The test will execute the xpath statement and then check the results are as expected

    Walk-through: BizTalk internally uses the XPathNavigator behind the xpath function to implement the queries, using the navigator's Select or Evaluate functions. In the sample (link at bottom) I have a small solution which contains a schema, from which I have generated a sample instance; I then use this instance as the basis for my tests. The sample's screenshots (not reproduced here) show the helper class in which I've encapsulated my xpath expressions, along with some helper functions which format the expression in the case of a repeating node, where you would want to inject an index into the xpath query. I have then created a test class with functions that execute queries against my sample XML file. In the test class I have a couple of helper functions which execute the xpath expressions in a similar way to BizTalk (you could have a proper helper class do this if you wanted), and in the BizTalk expression editor I can then use these functions alongside the xpath function. A condensed sketch of this pattern follows.
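    To make the pattern concrete, here is a minimal sketch along the lines of the walk-through. The class, statement, file, and expected values are invented for illustration (they are not from the original sample), and the test attributes assume MSTest:

        // Helper class that owns the xpath statements, outside the orchestration.
        public static class OrderXPathStatements
        {
            // The previously hard-coded statement becomes a testable constant.
            public const string ProductName =
                "string(//Order/OrderItem/ProductName)";

            // Formatted statement for a repeating node: inject an index.
            public static string ProductNameAt(int index)
            {
                return string.Format(
                    "string(//Order/OrderItem[{0}]/ProductName)", index);
            }
        }

        // Unit test executing the statement the same way BizTalk does,
        // via XPathNavigator.Evaluate, against a sample message instance.
        [TestMethod]
        public void ProductName_IsExtractedFromSampleMessage()
        {
            var document = new System.Xml.XPath.XPathDocument("SampleOrder.xml");
            var navigator = document.CreateNavigator();

            var result = (string)navigator.Evaluate(OrderXPathStatements.ProductName);

            Assert.AreEqual("Widget", result);
        }

    If the schema changes unexpectedly, this test breaks in the build, pointing straight at the statement rather than at an orchestration run.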
    Conclusion: I hope you can see that with very little effort you can make your life much easier by testing xpath statements outside of an orchestration, rather than hard-coding them directly into the orchestration. This can also save you lots of pain longer term, because your build should break if your schema changes unexpectedly, causing these xpath tests to fail; tests around the whole orchestration would be more difficult to troubleshoot to work out the cause of the problem.

    Sample Link: The sample is available from the following link: http://code.msdn.microsoft.com/testbtsxpathfunction

    Other Tools: On the subject of using the xpath function, if you don't already use it, the below tool is very useful for creating your xpath statements (thanks BizBert): http://www.bizbert.com/bizbert/2007/11/30/XPath+The+Hidden+Language+Of+BizTalk.aspx

    Read the article

  • Expanding the Partner Ecosystem with Third-Party Plug-ins

    - by Joe Diemer
    Oracle Enterprise Manager's extensibility capabilities are designed to allow customers and partners to adapt Enterprise Manager for management of heterogeneous environments with plug-ins and connectors. Third-party developers continue to take advantage of Oracle Enterprise Manager's Extensibility Development Kit (EDK) to build plug-ins for Enterprise Manager 12c, such as F5's BIG-IP Plug-in and Entuity's Eye of the Storm Network Management Plug-in. Partners can also validate their plug-ins through the Oracle Validated Integration (OVI) program, which assures customers that the plug-in has been tested and is functionally and technically sound, is designed in a reliable and standardized manner, and operates and performs as documented. Two very recent examples of partners with beta versions of their plug-ins are Blue Medora's VMware vSphere plug-in and the NetApp Storage plug-in.

    VMware vSphere Plug-in by Blue Medora. Blue Medora, an Oracle Partner Network (OPN) Gold member, just announced that it is now signing up customers to try a beta version of its new VMware vSphere plug-in for Enterprise Manager 12c. According to Blue Medora, the vSphere plug-in monitors critical VMware metrics (CPU, memory, disk, network, etc.) at the Host, VM, Cluster, and Resource Pool levels. It has minimal performance impact via an "agentless" approach that requires no installation directly on VMware servers. It has discovery capabilities for VMware Datacenters, ESX Hosts, Clusters, Virtual Machines, and Datastores. It offers integration of native VMware events into Enterprise Manager, and it provides over 300 VMware-related health, availability, performance, and configuration metrics. It comes with more than 30 out-of-the-box pre-defined thresholds and can manage VMware via a series of jobs split between cluster, host, and VM target types. The company reports that the Enterprise Manager 12c plug-in supports vSphere versions 4.0, 4.5, and 5.0. Platforms supported include Linux 64-bit, Windows, AIX, and Solaris SPARC and x86. Information about the plug-in, including how to sign up for the beta, is available at their web site at http://bluemedora.com after selecting the "Products" tab.

    NetApp Storage Plug-in. NetApp believes the combination of storage system monitoring with comprehensive management of Oracle systems via Enterprise Manager will help customers reduce the cost and complexity of managing applications that rely on NetApp storage and Oracle technologies. So NetApp built a plug-in and reports that it provides comprehensive availability and performance information for NetApp storage systems. Using the plug-in, Oracle Enterprise Manager customers with NetApp storage solutions can track the association between databases and storage components and thereby respond quickly to faults and IO performance bottlenecks. With the latest configuration management capabilities, one can also perform drift analysis to make sure all storage systems are configured per established gold standards. The company is also now signing up beta customers, which can be done at the NetApp Communities site at https://communities.netapp.com/groups/netapp-storage-system-plug-in-for-oem12c-beta.

    Learn More about Enterprise Manager Extensibility. More plug-ins from other partners are coming soon, and I'll be reporting on them here.
To learn more about Enterprise Manager and how customers and partners can build plug-ins using the EDK to manage a multi-vendor data center, go to http://oracle.com/enterprisemanager in the Heterogeneous Management solution area.  The site also lists the plug-ins available with information on how to obtain them.  More info about the Oracle Validated Integration program can be found at the OPN Enterprise Manager Knowledge Zone in the "Develop" tab.

    Read the article

  • OEG11gR2 integration with OES11gR2 Authorization with condition

    - by pgoutin
    Introduction: This OES use-case was defined originally by Subbu Devulapalli (http://accessmanagement.wordpress.com/). Based on this OES museum use-case, I have developed an OEG 11gR2 policy able to deal with OES authorization with conditions. From an OEG point of view, the way to deal with an OES condition is to provide some environmental/context attributes with the OES request.

    Museum Use-Case: All paintings in the museum have security sensors; an alarm goes off when a person comes too close to a painting. The employee designated for maintenance needs to use their ID and disable the alarm before maintenance. You are the Security Administrator for the museum, and you have been tasked with creating authorization policies to manage authorization for different paintings. Your first task is to understand how paintings are organized. Asking around, you are surprised to see that there is no formal process in place, so you need to start from scratch. The museum tracks the following attributes for each painting:

    1. Name of the work
    2. Painter
    3. Condition (good/poor)
    4. Cost

    You compile the list of paintings:

        Name of Painting   Painter             Paint Condition   Cost
        Mona Lisa          Leonardo da Vinci   Good              100
        Magi               Leonardo da Vinci   Poor               40
        Starry Night       Vincent Van Gogh    Poor               75
        Still Life         Vincent Van Gogh    Good               25

    Being a software geek who doesn't (yet) understand art, you feel that the price (or insurance price) of a painting is the most important criterion. So you decide that, based on years of experience, employees can be tasked with maintaining different paintings: paintings costing over 50 should only be handled by employees with over 20 years of experience, and employees with less than 10 years of experience should not handle any painting.

    Let us start with policy modeling. All paintings have a common set of attributes and actions, so it will be good to have them under a single resource type. Based on this resource type we will create the actual resources. So our high-level model is:

    1) Resource Type: Painting, which has the action manage and the following four attributes: a) Name of the work, b) Painter, c) Condition (good/poor), d) Cost
    2) To keep things simple, let's use the painting name for the resource name (in the real world you would try to use some identifier which is unique, because in the future we may end up with more than one painting which has the same name)
    3) Create resources based on the previous table
    4) Create an identity attribute Experience (Integer)
    5) Create the following authorization policies:
       a) Allow employees with over 20 years of experience to access all paintings
       b) Allow employees with 10–20 years of experience to access paintings which cost less than 50
       c) Deny access to all paintings for employees with less than 10 years of experience

    OES Authorization Configuration: We need to create the two allow policies (a and b) with specific conditions. We don't need an explicit policy for c), denying access to all paintings for employees with less than 10 years of experience, because Oracle Entitlements Server will automatically deny access if there is no matching policy.
    OEG Policy: The OEG policy and the corresponding 11g Authorization filter configuration are shown in the screenshots in the original post. The ${PAINTING_NAME} and ${USER_EXPERIENCE} variables are initialized by "Retrieve from HTTP header" filters for testing purposes. That is to say, under Service Explorer we need to provide the two attributes "Experience" and "Painting" to feed the OES 11g Authorization filter described above.

    Read the article

  • WebLogic Server–Use the Execution Context ID in Applications–Lessons From Hansel and Gretel

    - by james.bayer
    I learned a neat trick this week. Don't let your breadcrumbs go to waste like Hansel and Gretel did! Keep track of the code path, logs, and errors for each request as they flow through the system. Earlier this week, an OTN forum post in the WLS – General category by Oracle ACE John Stegeman asked how to retrieve the Execution Context ID so that it could be used on an error page that a user could provide to a help desk, or use to check with application administrators so they could look up what went wrong.

    What is the Execution Context ID (ECID)? Fusion Middleware injects an ECID as a request enters the system, and it stays with the request as it flows from Oracle HTTP Server to Oracle Web Cache to multiple WebLogic Servers to the Oracle Database. It's a way to uniquely identify a request across tiers. According to the documentation: "The value of the ECID is a unique identifier that can be used to correlate individual events as being part of the same request execution flow. For example, events that are identified as being related to a particular request typically have the same ECID value. The format of the ECID string itself is determined by an internal mechanism that is subject to change; therefore, you should not have or place any dependencies on that format."

    The novel idea I see in John's post is to extend this concept beyond the diagnostic information captured by Fusion Middleware. Why not also use this identifier in your own logs and errors, so you can correlate even more information together? Your logging might already identify the user, so why not identify the request as well, to filter down even more? All you need to do inside of WebLogic Server to get hold of this information is invoke DiagnosticContextHelper:

        weblogic.diagnostics.context.DiagnosticContextHelper.getContextId()

    This class has other helpful methods to see other values tracked by the diagnostics framework too. This way I can see even more detail and get information across tiers.

    In performance profiling, this can be very handy for tracking down where time is being spent in code. I've blogged and made videos about this before. JRockit Flight Recorder can use the WLDF Diagnostic Volume in WLS 10.3.3+ to automatically capture and correlate lots of helpful information for each request, without installing any special agents and with the out-of-the-box JRockit and WLS settings! You can see here how information is displayed in JRockit Flight Recorder about a single request as it calls a Servlet, which calls an EJB, which gets a DB connection, which starts a transaction, etc. You can get timings around everything and even see the SQL that is used: http://download.oracle.com/docs/cd/E21764_01/web.1111/e13714/using_flightrecorder.htm#WLDFC480 Recent versions of the WLS console are also able to visualize this data, so it works with other JVMs besides JRockit when you turn on WLDF instrumentation.

    I wrote a little sample application to verify for myself that the ECID does actually cross JVM boundaries. I invoked a Servlet in one JVM, which acted as an EJB client to a Stateless Session Bean running in another JVM. Each call returned the same ECID. You need to turn on WLDF Instrumentation for this to work; otherwise the framework returns null. I'm glad John put me on to this API, as I have some interesting ideas on how to correlate some information together.
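    A minimal sketch of how an error page could surface the ECID to end users. The servlet itself is invented for illustration; only the DiagnosticContextHelper call is from the post, and it assumes WLDF instrumentation is enabled:

        // Error-page servlet fragment: show the ECID so a user can quote it
        // to the help desk. getContextId() returns null when WLDF
        // instrumentation is not enabled for the request.
        import weblogic.diagnostics.context.DiagnosticContextHelper;

        public class ErrorPageServlet extends javax.servlet.http.HttpServlet {
            protected void doGet(javax.servlet.http.HttpServletRequest req,
                                 javax.servlet.http.HttpServletResponse resp)
                    throws java.io.IOException {
                String ecid = DiagnosticContextHelper.getContextId();
                resp.getWriter().printf(
                    "An error occurred. Please report this request id: %s%n",
                    ecid != null ? ecid : "(unavailable)");
            }
        }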

    Read the article

  • Multitenant Design for SQL Azure: White Paper Available

    - by Herve Roggero
    Cloud computing is about scaling out all your application tiers, from the web application to the database layer. In fact, the whole promise of Azure is to pay for just what you need. You need more IIS servers? No problemo... just spin up another web server. You expect to double your storage needs for Azure Tables? No problemo; you are covered there too... just pay for your storage needs. But what about the database tier, SQL Azure? How do you add new databases easily, and transparently, so that your application simply uses more of SQL Azure if it needs to? Without changing a single line of code? And what if you need to scale back down? Welcome to the world of database scalability.

    There are many terms that describe database scalability, including data federation, multitenant designs, and even NoSQL, depending on the technical solution you are implementing. Because SQL Azure is a transactional database system, NoSQL is not really an option. However, data federation and multitenant designs offer some very interesting scalability options that are worth considering. Data federation, a feature of SQL Azure that will be offered in the future, provides very interesting capabilities available natively on the SQL Azure platform. More to come in a few weeks... Multitenant designs, on the other hand, are design practices and technologies intended to help you reach flexible scalability options not available otherwise. The first incarnation of such a method was made available on CodePlex as an open source project (http://enzosqlshard.codeplex.com). This project was an attempt to provide a sharding library for educational purposes.

    All that sounds really cool... and really esoteric... almost a form of database "voodoo"... However, after being on multiple Azure projects, I am starting to see a real need. Customers want to be able to free themselves from the database tier, so that if they have 10 new customers tomorrow, all they need to do is add 2 more SQL Azure instances. It's that simple. How you achieve this, along with suggested application design guidelines, is covered in a white paper I just published. The white paper offers two primary sections. The first section describes the business and technical problem at hand, and how to classify it according to specific design patterns; for example, I discuss compressed shards through schema separation. The second section offers a method for addressing the needs of a multitenant design using a new library, the big brother of the CodePlex project mentioned previously (which I created earlier this year), complete with a management interface and such. A beta of this platform will be made available within weeks, as soon as the documentation is ready.

    I would like to ask you to drop me a quick email at [email protected] if you are going to download the white paper. It's not required, but it would help me get in touch with you for feedback. You can download the white paper here: http://www.bluesyntax.net/files/EnzoFramework.pdf . Thank you, and I am looking for feedback, thoughts, and implementation opportunities.
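    As a generic illustration of what "freeing the application from the database tier" looks like in code (this sketch is not the Enzo framework's actual API; the class, tenant names, and connection strings are invented):

        // Tenant-to-shard routing sketch: the application asks for "a
        // connection for this tenant" and never hard-codes a database.
        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class ShardDirectory
        {
            // Tenant -> SQL Azure connection string. Adding capacity for new
            // tenants means adding entries (and instances), not changing code.
            private readonly Dictionary<string, string> map =
                new Dictionary<string, string>
            {
                { "tenantA", "Server=tcp:server1.database.windows.net;Database=db1;User ID=u;Password=p;" },
                { "tenantB", "Server=tcp:server2.database.windows.net;Database=db2;User ID=u;Password=p;" }
            };

            public SqlConnection OpenFor(string tenantId)
            {
                var connection = new SqlConnection(map[tenantId]);
                connection.Open();
                return connection;
            }
        }

    The application code asks the directory for a tenant's connection; onboarding ten new customers then means adding map entries and SQL Azure instances, not changing application logic.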

    Read the article

  • Authenticating your windows domain users in the cloud

    - by cibrax
    Moving to the cloud can represent a big challenge for many organizations when it comes to reusing existing infrastructure. For applications that drive existing business processes in the organization, reusing IT assets like Active Directory represents a good part of that challenge. Consider, for example, a new mobile web application that sales representatives can use to interact with an existing CRM system in the organization.

    In the case of Windows Azure, the Access Control Service (ACS) already provides some integration with ADFS through WS-Federation. That means any organization can create a new trust relationship between the STS running in the ACS and the STS running in ADFS. As the following image illustrates, the ADFS running in the organization has to be exposed outside the network boundaries to talk to the ACS. This is usually accomplished through an ADFS proxy running in a DMZ. That is the official story for authenticating existing domain users with the ACS. Getting ADFS up and running in the organization, talking to a proxy and also trusting the ACS, can be a painful experience. It basically requires advanced knowledge of ADFS and exhaustive testing to get everything right.

    However, if you want to get an infrastructure ready for authenticating your domain users in the cloud in a matter of minutes, you will probably want to take a look at the sample I wrote for talking to an existing Active Directory using a regular WCF service through the Service Bus Relay Binding. You can use WCF's ability to self-host the authentication service within any program running in the domain (typically a Windows service). The service does not require opening any port, as it opens an outbound connection to the cloud through the Relay Service. In addition, the service is protected from being invoked by any unauthorized party by the ACS, which acts as a firewall between any client and the service. In that way, we can get a very safe solution up and running almost immediately.

    To make the solution even more convenient, I implemented an STS in the cloud that internally invokes the service running on premises for authenticating the users. Any existing web application in the cloud can just establish a trust relationship with this STS and authenticate the users via the WS-Federation passive profile with regular HTTP calls, which makes this very attractive for mobile web, for example.

    This is how the WCF service running on premises looks:

        [ServiceBehavior(Namespace = "http://agilesight.com/active_directory/agent")]
        public class ProxyService : IAuthenticationService
        {
            IUserFinder userFinder;
            IUserAuthenticator userAuthenticator;

            public ProxyService()
                : this(new UserFinder(), new UserAuthenticator())
            {
            }

            public ProxyService(IUserFinder userFinder, IUserAuthenticator userAuthenticator)
            {
                this.userFinder = userFinder;
                this.userAuthenticator = userAuthenticator;
            }

            public AuthenticationResponse Authenticate(AuthenticationRequest request)
            {
                if (userAuthenticator.Authenticate(request.Username, request.Password))
                {
                    return new AuthenticationResponse
                    {
                        Result = true,
                        Attributes = this.userFinder.GetAttributes(request.Username)
                    };
                }

                return new AuthenticationResponse { Result = false };
            }
        }

    Two external dependencies are used by this service: one for authenticating users (IUserAuthenticator) and one for retrieving user attributes from the user's directory (IUserFinder). The UserAuthenticator implementation is just a wrapper around the LogonUser Win API.
    The UserFinder implementation relies on Directory Services in .NET for searching the user attributes in an existing directory service like Active Directory or the local user store:

        public UserAttribute[] GetAttributes(string username)
        {
            var attributes = new List<UserAttribute>();

            var identity = UserPrincipal.FindByIdentity(
                new PrincipalContext(this.contextType, this.server, this.container),
                IdentityType.SamAccountName, username);
            if (identity != null)
            {
                var groups = identity.GetGroups();
                foreach (var group in groups)
                {
                    attributes.Add(new UserAttribute { Name = "Group", Value = group.Name });
                }
                if (!string.IsNullOrEmpty(identity.DisplayName))
                    attributes.Add(new UserAttribute { Name = "DisplayName", Value = identity.DisplayName });
                if (!string.IsNullOrEmpty(identity.EmailAddress))
                    attributes.Add(new UserAttribute { Name = "EmailAddress", Value = identity.EmailAddress });
            }

            return attributes.ToArray();
        }

    As you can see, the code is simple and uses all the existing infrastructure in Azure to simplify a problem that looks very complex at first glance with ADFS. All the source code for this sample is available to download (or change) in this GitHub repository: https://github.com/AgileSight/ActiveDirectoryForCloud
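    For completeness, here is a rough sketch of how such a service might be self-hosted on premises through the relay. The binding and token-provider details are assumptions based on the classic Microsoft.ServiceBus SDK, not code from the sample, and the namespace and credentials are placeholders:

        // Hypothetical on-premises host: opens an *outbound* connection to the
        // Service Bus relay, so no inbound firewall port is required.
        using System;
        using System.ServiceModel;
        using Microsoft.ServiceBus;

        class Program
        {
            static void Main()
            {
                var host = new ServiceHost(typeof(ProxyService));

                var endpoint = host.AddServiceEndpoint(
                    typeof(IAuthenticationService),
                    new NetTcpRelayBinding(),
                    ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "adproxy"));

                // Credentials the listener uses to authenticate to the relay/ACS.
                endpoint.Behaviors.Add(new TransportClientEndpointBehavior
                {
                    TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                        "owner", "issuer-secret")
                });

                host.Open();
                Console.WriteLine("Listening on the relay. Press Enter to exit.");
                Console.ReadLine();
                host.Close();
            }
        }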

    Read the article

  • DISA Cross Domain Enterprise Solutions on the NetBeans Platform

    - by Geertjan
    Bray 2.0 is a tool based on the NetBeans Platform that assists in creating valid Data Flow Configuration (DFC) files. The DFC Specification was developed to provide a standardized way of defining, validating, and approving data flows for use on cross-domain guarding solutions. A DFC document specifies key entities such as security domains, guards that facilitate data between security domains, data flows that describe how data travels between security domains, filters that transform and validate the data, and more. Related info: http://www.disa.mil/Services/Information-Assurance/Cross-Domain-Solutions The Bray product is in development at Fulcrum IT (http://www.fulcrumco.com). The DFC Specification and Bray were developed in support of the US Department of Defense. Bray 2.0 marks the first release of Bray on the NetBeans Platform and utilizes a number of features that are core to the NetBeans Platform:

    - Modular pluggability. Bray consumers can integrate their own tools, file types, and more into the product with relative ease.
    - Robust UI. The NetBeans Platform's intuitive UI makes it easy to access and manipulate multiple aspects of a DFC.
    - Explorer. The Explorer is a key component that makes the DFC XML easy to traverse, edit, and search for errors.
    - Context-sensitive help. JavaHelp can be readily integrated for the product as well as all the UI within it.
    - Editors. Any external file can be added to a DFC. Users can register their own editors or use the provided NetBeans editors to edit files.
    - Printing. The NetBeans Platform Print API makes it easy to determine what should be printed and how.

    Bray 2.0 provides a lot of key features for developing valid, robust DFC files:

    - XML validation. A DFC can be validated against the DFC schema specification.
    - DFC Check List. An interactive, minimal guide for creating a complete DFC.
    - Summary Window. The Summary Window functions like the Navigator in NetBeans IDE. The current "item of interest" is checked against various business rules, providing the ability to quickly find and fix errors.
    - Change Log. Bray audits every change to a DFC and places it in a change log for users to peruse.
    - Comments. Users can optionally add comments for other users to see.
    - Digital signatures. DFC files can be digitally signed. A signature history and signature validation are provided in Bray.
    - Pluggable security schemes. Bray ships with plain-text and IC-ISM security schemes. If needed, users can integrate additional ones.

    ...and more to come! New features for Bray are constantly in development, including use of the NetBeans Visual Library, language support, and more.

    Read the article

  • Windows Phone 7 Silverlight / XNA development talk

    - by subodhnpushpak
    Hi, I presented on Windows Phone 7 app development using Silverlight; here are a few pics from the event. I demonstrated Visual Studio and the emulator's capabilities and features, a demo of WP7 app communication with an OData service, and a demo of an XNA app. There were lots of curious questions; I am listing them here because they keep popping up again and again:

    1. What tools does it take to develop a WP7 app? Are they free? A typical WP7 app can be developed using either Silverlight or XNA. For developers, Visual Studio 2010 is a good choice as it provides an integrated development environment with lots of useful project templates, which makes the task really easy. For designers, Blend may be used to develop the UI in XAML. Both tools are FREE (express versions) to download and very intuitive to use.

    2. What about the learning curve? If you know C# (or any other programming language), the learning curve is really flat. XAML (used for UI) may be new to you, but trust me, it's very intuitive. Also, you can use Microsoft Blend to generate the UI (XAML) for you.

    3. How can I develop/test an app without using an actual device? How can I be sure my app runs as expected on an actual device? The WP7 SDK comes with an excellent emulator, which you can use for development and testing on a computer. Later you can just change a setting and deploy the application to a WP7 device. You will require the Zune software for deploying the application to the phone, along with a developer key from the WP7 marketplace. You can obtain a key from the marketplace by filling in a form; the whole registration process is easy, just follow the steps on the site.

    4. Which one should I use? Silverlight or XNA? Use Silverlight for enterprise/business/utility apps. Use XNA for games. While each platform is capable and strong, and they may be used in conjunction as well, the methodologies used for development on these platforms are very different: XNA works on a typical game loop (conceptually a Do..While loop), whereas Silverlight works on an event-based methodology; see the sketch below.

    5. Where are the learning resources? Are they free? There is lots of stuff on WP7, and most of it is free. There is an excellent free book by Charles Petzold to download, and http://www.microsoft.com/windowsphone is full of demos/to-dos/videos.

    All the exciting stuff was captured live and you can view it here, in case you were not able to catch it live: http://livestre.am/AUfx. My talk starts from the 3:19:00 timeline in the video!! Is there an app you miss on WP7? Do let me know about it and I may work on it for free!!! Keep discovering. Keep it Simple. WP7. Subodh
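    To illustrate the loop-versus-events difference from question 4, here is a bare-bones sketch of the XNA model (it follows the standard XNA Game template shape; the class name is illustrative):

        // XNA: the framework calls Update/Draw in a continuous game loop,
        // unlike Silverlight, where code runs in response to events.
        public class MyGame : Microsoft.Xna.Framework.Game
        {
            Microsoft.Xna.Framework.GraphicsDeviceManager graphics;

            public MyGame()
            {
                graphics = new Microsoft.Xna.Framework.GraphicsDeviceManager(this);
            }

            protected override void Update(Microsoft.Xna.Framework.GameTime gameTime)
            {
                // Runs every tick: read input, advance the game state.
                base.Update(gameTime);
            }

            protected override void Draw(Microsoft.Xna.Framework.GameTime gameTime)
            {
                // Runs every frame: render the current state.
                GraphicsDevice.Clear(Microsoft.Xna.Framework.Color.CornflowerBlue);
                base.Draw(gameTime);
            }
        }

    Silverlight, by contrast, would express the same behavior as handlers reacting to input and timer events rather than a continuously running loop.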

    Read the article

  • how to assign javascript variable value to the google analytics script? [migrated]

    - by Vinoth Prakash
    I have assigned two values to two hidden fields on the server side and accessed those values on the client side using script. I have written the Google Analytics code and set up custom variables. I need to pass the two values stored in the JavaScript variables as the "value" of the custom variables. I have assigned the variables, but the values are not displaying. Please tell me what error I made in the script.

    My aspx code:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <br />
                Total Pirce&nbsp; &nbsp;: <asp:Label ID="Label1" runat="server" Text="10"></asp:Label><br />
                &nbsp;Ship Price&nbsp; &nbsp; : <asp:Label ID="Label2" runat="server" Text="5"></asp:Label><br />
                ------------------<br />
                Grand Total : <asp:Label ID="Label3" runat="server" Text="15"></asp:Label><br />
                ------------------
            </div>
            <asp:HiddenField ID="HiddenField1" runat="server" />
            <asp:HiddenField ID="HiddenField2" runat="server" />
            </form>
            <script type="text/javascript">
                var serverhid1 = document.getElementById('HiddenField1').value;
                var serverhid2 = document.getElementById('HiddenField2').value;
                alert(serverhid1);
                alert(serverhid2);

                var _gaq = _gaq || [];
                _gaq.push(['_setAccount', 'UA-35156990-1']);
                // Set Custom Variables
                _gaq.push(['_setCustomVar', 1, 'TotalPirce', serverhid1, 3]);
                _gaq.push(['_setCustomVar', 2, 'Shipping', 'yes', 3]);
                _gaq.push(['_setCustomVar', 3, 'GrandTotal', check(), 3]);
                _gaq.push(['_setDomainName', 'none']);
                _gaq.push(['_trackPageview']);

                (function () {
                    var ga = document.createElement('script');
                    ga.type = 'text/javascript';
                    ga.async = true;
                    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                    var s = document.getElementsByTagName('script')[0];
                    s.parentNode.insertBefore(ga, s);
                })();
            </script>
        </body>
        </html>

    My cs code:

        protected void Page_Load(object sender, EventArgs e)
        {
            HiddenField1.Value = Label1.Text;
            HiddenField2.Value = Label2.Text;
        }
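    One plausible reading of what is going wrong, offered as a guess: check() is never defined, so that _setCustomVar line throws a ReferenceError and the script block stops before _trackPageview ever runs, which is why no values show up; serverhid2 is also read but never used. A corrected fragment might look like this, assuming the grand total was meant to be total price plus shipping:

        // Use the hidden-field values that were already read above; dropping
        // the undefined check() call lets the script run to completion so
        // _trackPageview actually fires.
        _gaq.push(['_setCustomVar', 1, 'TotalPrice', serverhid1, 3]);
        _gaq.push(['_setCustomVar', 2, 'ShipPrice', serverhid2, 3]);
        _gaq.push(['_setCustomVar', 3, 'GrandTotal',
                   String(Number(serverhid1) + Number(serverhid2)), 3]);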

    Read the article

  • SDL_image & OpenGL Problem

    - by Dylan
    I've been following tutorials online to load textures using SDL and display them on an OpenGL quad, but I've been getting weird results that no one else on the internet seems to be getting... When I render the texture in OpenGL I get something like this: http://www.kiddiescissors.com/after.png while the original .bmp file is this: http://www.kiddiescissors.com/before.bmp I've tried other images too, so it's not that this particular image is corrupt. It seems like my RGB channels are all jumbled or something. I'm pulling my hair out at this point. Here's the relevant code from my init() function:

        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            return 1;
        }
        SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
        SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
        SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
        SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
        SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);
        SDL_SetVideoMode(WINDOW_WIDTH, WINDOW_HEIGHT, 32, SDL_HWSURFACE | SDL_GL_DOUBLEBUFFER | SDL_OPENGL);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(50, (GLfloat)WINDOW_WIDTH/WINDOW_HEIGHT, 1, 50);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_MULTISAMPLE);
        glEnable(GL_TEXTURE_2D);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_BLEND);

    Here's the code that is called when my main player object (the one with which this sprite is associated) is initialized:

        texture = 0;
        SDL_Surface* surface = IMG_Load("i.bmp");
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, surface->w, surface->h, 0, GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        SDL_FreeSurface(surface);

    And here's the relevant code from my display function:

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glColor4f(1, 1, 1, 1);
        glPushMatrix();
        glBindTexture(GL_TEXTURE_2D, texture);
        glTranslatef(getCenter().x, getCenter().y, 0);
        glRotatef(getAngle()*(180/M_PI), 0, 0, 1);
        glTranslatef(-getCenter().x, -getCenter().y, 0);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(getTopLeft().x, getTopLeft().y, 0);
        glTexCoord2f(0, 1); glVertex3f(getTopLeft().x, getTopLeft().y + size.y, 0);
        glTexCoord2f(1, 1); glVertex3f(getTopLeft().x + size.x, getTopLeft().y + size.y, 0);
        glTexCoord2f(1, 0); glVertex3f(getTopLeft().x + size.x, getTopLeft().y, 0);
        glEnd();
        glPopMatrix();

    Let me know if I left out anything important, or if you need more info from me. Thanks a ton, -Dylan
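    A plausible culprit, offered as a guess rather than a confirmed diagnosis: 24-bit BMPs store their pixels in BGR order, and the glTexImage2D call above tells OpenGL the source data is GL_RGB, which would swap the red and blue channels exactly as the before/after images suggest. A sketch of the fix, picking the source format from the surface instead of assuming RGB:

        /* Choose the upload format from the SDL surface's pixel layout.
           GL_BGR (OpenGL 1.2+) matches the byte order of typical 24-bit BMPs. */
        GLenum sourceFormat = GL_RGB;
        if (surface->format->BytesPerPixel == 3 &&
            surface->format->Rmask != 0x000000ff) {
            sourceFormat = GL_BGR;   /* blue channel stored first */
        }
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, surface->w, surface->h,
                     0, sourceFormat, GL_UNSIGNED_BYTE, surface->pixels);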

    Read the article

  • WPF: Running code when Window rendering is completed

    - by Ilya Verbitskiy
    WPF is full of surprises. It makes complicated tasks easier, but at the same time overcomplicates easy tasks as well. A good example of such an overcomplicated thing is how to run code when you're sure that window rendering is completed. The Window Loaded event does not always work, because controls might still be rendering. I had this issue working with the Infragistics XamDockManager: it continued rendering widgets even after the Window Loaded event had been raised. Unfortunately there is no "official" solution for this problem, but there is a trick: you can execute your code asynchronously using the Dispatcher class.

        Dispatcher.BeginInvoke(new Action(() => Trace.WriteLine("DONE!", "Rendering")), DispatcherPriority.ContextIdle, null);

    This code should be added to your Window Loaded event handler. It is executed when all controls inside your window have been rendered. I created a small application to prove this idea. The application has one window with a few buttons. Each button logs when it has changed its actual size. It also logs when the Window Loaded event is raised, and, finally, when rendering is completed. The window's layout is straightforward:

        <Window x:Class="OnRendered.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Run the code when window rendering is completed." Height="350" Width="525"
                Loaded="OnWindowLoaded">
            <Window.Resources>
                <Style TargetType="{x:Type Button}">
                    <Setter Property="Padding" Value="7" />
                    <Setter Property="Margin" Value="5" />
                    <Setter Property="HorizontalAlignment" Value="Center" />
                    <Setter Property="VerticalAlignment" Value="Center" />
                </Style>
            </Window.Resources>
            <StackPanel>
                <Button x:Name="Button1" Content="Button 1" SizeChanged="OnSizeChanged" />
                <Button x:Name="Button2" Content="Button 2" SizeChanged="OnSizeChanged" />
                <Button x:Name="Button3" Content="Button 3" SizeChanged="OnSizeChanged" />
                <Button x:Name="Button4" Content="Button 4" SizeChanged="OnSizeChanged" />
                <Button x:Name="Button5" Content="Button 5" SizeChanged="OnSizeChanged" />
            </StackPanel>
        </Window>

    The SizeChanged event handler simply traces that the event has happened:

        private void OnSizeChanged(object sender, SizeChangedEventArgs e)
        {
            Button button = (Button)sender;
            Trace.WriteLine("Size has been changed", button.Name);
        }

    The Window Loaded event handler is slightly more interesting: first it schedules the code to be executed using the Dispatcher class, and then it logs the event:

        private void OnWindowLoaded(object sender, RoutedEventArgs e)
        {
            Dispatcher.BeginInvoke(new Action(() => Trace.WriteLine("DONE!", "Rendering")), DispatcherPriority.ContextIdle, null);
            Trace.WriteLine("Loaded", "Window");
        }

    As a result, I saw these trace messages:

        Button5: Size has been changed
        Button4: Size has been changed
        Button3: Size has been changed
        Button2: Size has been changed
        Button1: Size has been changed
        Window: Loaded
        Rendering: DONE!

    You can find the solution on GitHub.

    Read the article

  • Enterprise Manager in EPM 11.1.2.x...a game of hide and seek!

    - by THE
    guest article by Maurice Bauhahn: Users of Oracle Hyperion Enterprise Performance Management 11.1.2.0 and 11.1.2.1 may puzzle over why the URL http://<servername>:7001/em does not conjure up Enterprise Manager Fusion Middleware Control. This powerful tool has been installed by default... but WebLogic may not have been 'extended' to allow you to call it up (we are hopeful this 'extend' step will not be needed with 11.1.2.2). The explanation is on pages 425 and following of the following document: http://www.oracle.com/technetwork/middleware/bi-foundation/epm-tips-issues-1-72-427329.pdf A close look at the screen dumps in that section reveals a somewhat scary prospect, however: the non-AdminServer servlets had all failed (see the red down-arrow icons to the right of their names) after the configuration! Of course you would want to avoid that scenario! A rephrasing of the instructions might help:

    - Ensure the WebLogic AdminServer is not running (in a default scenario that would mean port 7001 is not active).
    - Ensure you have logged into the computer as the installing owner of EPM.
    - Since Enterprise Manager uses a LOT of resources, be sure that there is adequate free RAM to accommodate the added load.
    - On the machine where the WebLogic AdminServer is set up (typically the Foundation Services machine), run \Oracle\Middleware\wlserver_10.3\common\bin\config (config.sh on Unix).
    - Select the 'Extend an existing WebLogic domain' option, and click the 'Next' button.
    - Select the domain being used by EPM System. Typically, the default domain is created under /Oracle/Middleware/user_projects/domains and is named EPMSystem. Click 'Next'.
    - Under 'Extend my domain automatically to support the following added products', place a check mark before 'Oracle Enterprise Manager - 11.1.1.0 [oracle_common]' to select it.
    - Continue accepting the defaults by clicking 'Next' on each page until, on the last page, you click 'Extend'. The system will grind for a few minutes while it configures (deploys?) EM.
    - Start the AdminServer.

    Sometimes there is contention in the startup order of the various servlets (resulting in some not coming up). To avoid that problem on Microsoft Windows machines, you may start and stop services via the following command-line commands, analogous to those run on Linux/Unix (these space out the timings of these events more carefully):

        Ensure EPM is up:       \Oracle\Middleware\user_projects\epmsystem1\bin\start.bat
        Ensure WebLogic is up:  \Oracle\Middleware\user_projects\domains\EPMSystem\bin\startWebLogic.cmd
        Shut down WebLogic:     \Oracle\Middleware\user_projects\domains\EPMSystem\bin\stopWebLogic.cmd
        Shut down EPM:          \Oracle\Middleware\user_projects\epmsystem1\bin\stop.bat

    Now you should be able to troubleshoot more successfully with the EM tool.

    Read the article

  • How-to remove the close icon from task flows opened in dialogs (11.1.1.4)

    - by frank.nimphius
    ADF bounded task flows can be opened in an external dialog and return values to the calling application, as documented in chapter 19 of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework 11g: http://download.oracle.com/docs/cd/E17904_01/web.1111/b31974/taskflows_dialogs.htm#BABBAFJB

    Setting the task flow call activity property Run as Dialog to true and the Display Type property to inline-popup opens the bounded task flow in an inline popup. To launch the dialog, a command item is used that references the control flow case to the task flow call activity:

        <af:commandButton text="Lookup" id="cb6"
                windowEmbedStyle="inlineDocument" useWindow="true"
                windowHeight="300" windowWidth="300"
                action="lookup" partialSubmit="true"/>

    By default, the dialog opens with a close icon in its header that does not raise a task flow return event when used to dismiss the dialog. In previous releases, the close icon could only be hidden using CSS in a custom skin definition, as explained in a previous OTN Harvest publication (12/2010): http://www.oracle.com/technetwork/developer-tools/adf/learnmore/dec2010-otn-harvest-199274.pdf

    As a new feature, Oracle JDeveloper 11g (11.1.1.4) provides an option to globally remove the close icon from inline dialogs without using CSS. For this, the following managed bean definition needs to be added to the adfc-config.xml file:

        <managed-bean>
          <managed-bean-name>
            oracle$adfinternal$view$rich$dailogInlineDocument
          </managed-bean-name>
          <managed-bean-class>java.util.TreeMap</managed-bean-class>
          <managed-bean-scope>application</managed-bean-scope>
          <map-entries>
            <key-class>java.lang.String</key-class>
            <value-class>java.lang.String</value-class>
            <map-entry>
              <key>MODE</key>
              <value>withoutCancel</value>
            </map-entry>
          </map-entries>
        </managed-bean>

    Note the setting of the managed bean scope to application, which applies this setting to all sessions of an application.

    Read the article

  • Keeping an Eye on Your Storage

    - by Fatherjack
    There are plenty of resources that advise you about looking for signs that your storage hardware is having problems. SQL Server alerts for errors 823, 824, and 825 are covered here by Paul Randal of SQLskills: http://www.sqlskills.com/blogs/paul/a-little-known-sign-of-impending-doom-error-825/ and here by me: https://www.simple-talk.com/blogs/2011/06/27/alerts-are-good-arent-they/. Until very recently I wasn't aware that there is a different way to track the 823 and 824 errors. It was by complete chance that I happened to be searching about in the msdb database when I found the suspect_pages table. Running a query against it, I got zero rows. This, as it turns out, is a good thing. Highlighting the table name and pressing F1 got me nowhere – is it just me, or does Books Online fail to load properly for no obvious reason sometimes? So I typed the table name into the search bar and got my local version of http://msdn.microsoft.com/en-us/library/ms174425.aspx. From that we get the following description: "Contains one row per page that failed with a minor 823 error or an 824 error. Pages are listed in this table because they are suspected of being bad, but they might actually be fine. When a suspect page is repaired, its status is updated in the event_type column."

    So, in the table we would, on healthy hardware, expect to see zero rows, but on disks that are having problems the event_type column would show us what is going on. Where there are suspect pages on the disk, the rows would have an event_type value of 1, 2, or 3; where those suspect pages have been restored, repaired, or deallocated by DBCC, the value would be 4, 5, or 7. Having this table means that we can set up SQL Monitor to check the status of our hardware, as we can create a custom metric based on the query below:

        USE [msdb]
        go

        SELECT COUNT(*)
        FROM [dbo].[suspect_pages] AS sp

    All we need to do is set the metric to collect this value and set an alert to email when the value is greater than zero, and we are then able to let SQL Monitor take care of our storage. Note that the suspect_pages table does not record anything concerning error 825, which the links at the top of the page cover in more detail. I would suggest that you set SQL Monitor to alert on the suspect_pages table in addition to taking other measures to look after your storage hardware, and not have it as your only precaution.

    Microsoft actually passes ownership and administration of the suspect_pages table over to the database administrator (Manage the suspect_pages Table (SQL Server)) and, in a surprising move (to me at least), advises DBAs to actively update and archive the data in it. The table will only ever contain a maximum of 1000 rows; once full, new rows will not be added. Keeping an eye on this table is pretty important. In my opinion, if you get to 1000 rows in this table and are not already waiting for new disks to be added to your server, you are doing something wrong; and if you do have 1000 rows in there, you need to move data out quickly, because you may be missing important events on your server.
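    Since Microsoft hands day-to-day administration of this table to the DBA, here is a minimal housekeeping sketch along those lines. The archive table is an assumption: create it first with the same columns as suspect_pages.

        -- Archive rows for resolved pages (event_type 4 = restored,
        -- 5 = repaired, 7 = deallocated by DBCC), then remove them so
        -- the table never hits its 1000-row cap.
        USE [msdb];

        INSERT INTO [dbo].[suspect_pages_archive]   -- assumed pre-created copy table
        SELECT [database_id], [file_id], [page_id],
               [event_type], [error_count], [last_update_date]
        FROM [dbo].[suspect_pages]
        WHERE [event_type] IN (4, 5, 7);

        DELETE FROM [dbo].[suspect_pages]
        WHERE [event_type] IN (4, 5, 7);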

    Read the article

  • Impressions of Pivotal Tracker

Pivotal Tracker is a free, online agile project management system. I've been using it recently to better communicate to customers about the current state of our project. In Pivotal Tracker, the unit of work is a story, and stories are arranged into iterations or delivery cycles. Stories can be any level of granularity you want, but the idea is to use stories to communicate clearly to customers, so you don't want to write a novel. You especially don't want to write a list of detailed programming tasks. A good story for a point of sale system might be: "Allow managers to override the price of an item while ringing up a customer." A less useful story: "Script out the process of adding a manager flag to the user table and stage that script into the deploy directory." Stories are estimated using a point scale, by default 1, 2 or 3. Iterations are then automatically laid out by combining enough stories to fill the point total for that period of time (a rough sketch of this fill logic appears below). You have to start with a guess on how many points your team can do in an iteration, then adjust with real data as you complete iterations. This is basic agile methodology, but where Pivotal Tracker adds value is that it automatically and graphically lays out iterations for you on your project site. This makes communication and planning easy. Compiling release notes is no longer painful, as it has been clear from the outset what work is going on. While I much prefer Pivotal Tracker's customer-facing interface over what we used previously (TFS), I see a couple of gaps. First, I have not been able to make much headway with the reporting tools. Despite my complaints about TFS, it can produce some nice reports. Second, it's not clear where, if at all, I'd keep track of purely internal tasks. I'm talking about server maintenance, cleaning up source control, checking back on some code which you never quite felt right about. There's no purpose in cluttering up an iteration backlog with these items, but if you don't track them, you lose them. I'm not sure what a good answer for that is. One gap I thought I'd see, which I don't, is more granular dev tasks. If I'm implementing a story, I'll write out the steps and track my progress, but really, those steps aren't useful to anybody but me. The only time I've found that level of detail really useful is when my tasks are defined at too high a level anyway, or when I'm working with someone who needs more coaching and might not be able to finish a story in time without some scaffolding to get them going. You can learn more about Pivotal Tracker at: http://www.pivotaltracker.com/learnmore.

--- Relevant Links ---
A good intro to stories: http://www.agilemodeling.com/artifacts/userStory.htm
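As promised above, here is a rough C# illustration of the iteration-fill idea just described: greedily pack stories into iterations until the team's velocity is reached. This is a sketch of the concept only, not Pivotal Tracker's actual algorithm, and the names are mine:

using System;
using System.Collections.Generic;

// Greedy sketch: pack story point estimates into iterations until the
// velocity (points per iteration) would be exceeded, then start a new one.
class IterationPlanner
{
    static List<List<int>> PlanIterations(IEnumerable<int> storyPoints, int velocity)
    {
        var iterations = new List<List<int>>();
        var current = new List<int>();
        int used = 0;

        foreach (int points in storyPoints)
        {
            if (used + points > velocity && current.Count > 0)
            {
                iterations.Add(current);   // close the full iteration
                current = new List<int>();
                used = 0;
            }
            current.Add(points);
            used += points;
        }

        if (current.Count > 0)
            iterations.Add(current);       // last, partially filled iteration

        return iterations;
    }

    static void Main()
    {
        // Example: stories estimated 1-3 points, velocity of 8 points per iteration.
        var plan = PlanIterations(new[] { 3, 2, 1, 3, 3, 2, 2, 1 }, 8);
        Console.WriteLine("Iterations planned: " + plan.Count);
    }
}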

    Read the article

  • Schizophrenic Ubuntu 12.10-12.04: Atheros 922 PCI WIFI is disabled in Unity but enabled in terminal - How to get it to work?

    - by zewone
I am trying to get my PCI wireless Atheros 922 card to work. It is disabled in Unity: both in the network utility and on the desktop (see screenshot: http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png). I have tried much different advice from many different forums: installed 12.10 instead of 12.04, enabled all interfaces, etc. I have read about the ath9k driver. The terminal shows no hardware or software lock for the Atheros card; nevertheless, it is still disabled. Nothing has worked so far. Any help is much appreciated. Here are more technical details:

myuser@adri1:~$ sudo lshw -C network
  *-network:0 DISABLED
       description: Wireless interface
       product: AR922X Wireless Network Adapter
       vendor: Atheros Communications Inc.
       physical id: 2
       bus info: pci@0000:03:02.0
       logical name: wlan1
       version: 01
       serial: 00:18:e7:cd:68:b1
       width: 32 bits
       clock: 66MHz
       capabilities: pm bus_master cap_list ethernet physical wireless
       configuration: broadcast=yes driver=ath9k driverversion=3.5.0-17-generic firmware=N/A latency=168 link=no multicast=yes wireless=IEEE 802.11bgn
       resources: irq:18 memory:d8000000-d800ffff
  *-network:1
       description: Ethernet interface
       product: VT6105/VT6106S [Rhine-III]
       vendor: VIA Technologies, Inc.
       physical id: 6
       bus info: pci@0000:03:06.0
       logical name: eth0
       version: 8b
       serial: 00:11:09:a3:76:4a
       size: 10Mbit/s
       capacity: 100Mbit/s
       width: 32 bits
       clock: 33MHz
       capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=via-rhine driverversion=1.5.0 duplex=half latency=32 link=no maxlatency=8 mingnt=3 multicast=yes port=MII speed=10Mbit/s
       resources: irq:18 ioport:d300(size=256) memory:d8013000-d80130ff
  *-network DISABLED
       description: Wireless interface
       physical id: 1
       bus info: usb@1:8.1
       logical name: wlan0
       serial: 00:11:09:51:75:36
       capabilities: ethernet physical wireless
       configuration: broadcast=yes driver=rt2500usb driverversion=3.5.0-17-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg

myuser@adri1:~$ sudo rfkill list all
0: hci0: Bluetooth
        Soft blocked: no
        Hard blocked: no
1: phy1: Wireless LAN
        Soft blocked: no
        Hard blocked: yes
2: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: no

myuser@adri1:~$ dmesg | grep wlan0
[   15.114235] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready

myuser@adri1:~$ dmesg | egrep 'ath|firm'
[   14.617562] ath: EEPROM regdomain: 0x30
[   14.617568] ath: EEPROM indicates we should expect a direct regpair map
[   14.617572] ath: Country alpha2 being used: AM
[   14.617575] ath: Regpair used: 0x30
[   14.637778] ieee80211 phy0: >Selected rate control algorithm 'ath9k_rate_control'
[   14.639410] Registered led device: ath9k-phy0

myuser@adri1:~$ dmesg | grep wlan1
[   15.119922] IPv6: ADDRCONF(NETDEV_UP): wlan1: link is not ready

myuser@adri1:~$ lspci -nn | grep 'Atheros'
03:02.0 Network controller [0280]: Atheros Communications Inc. AR922X Wireless Network Adapter [168c:0029] (rev 01)

myuser@adri1:~$ sudo ifconfig
eth0      Link encap:Ethernet  HWaddr 00:11:09:a3:76:4a
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::211:9ff:fea3:764a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5457 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2548 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3425684 (3.4 MB)  TX bytes:282192 (282.1 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:590 errors:0 dropped:0 overruns:0 frame:0
          TX packets:590 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:53729 (53.7 KB)  TX bytes:53729 (53.7 KB)

myuser@adri1:~$ sudo iwconfig
wlan0     IEEE 802.11bg  ESSID:off/any
          Mode:Managed  Access Point: Not-Associated  Tx-Power=off
          Retry  long limit:7  RTS thr:off  Fragment thr:off
          Encryption key:off
          Power Management:on

lo        no wireless extensions.

eth0      no wireless extensions.

wlan1     IEEE 802.11bgn  ESSID:off/any
          Mode:Managed  Access Point: Not-Associated  Tx-Power=0 dBm
          Retry  long limit:7  RTS thr:off  Fragment thr:off
          Encryption key:off
          Power Management:off

myuser@adri1:~$ lsmod | grep "ath9k"
ath9k                 116549  0
mac80211              461161  3 rt2x00usb,rt2x00lib,ath9k
ath9k_common           13783  1 ath9k
ath9k_hw              376155  2 ath9k,ath9k_common
ath                    19187  3 ath9k,ath9k_common,ath9k_hw
cfg80211              175375  4 rt2x00lib,ath9k,mac80211,ath

myuser@adri1:~$ iwlist scan
wlan0     Failed to read scan data : Network is down
lo        Interface doesn't support scanning.
eth0      Interface doesn't support scanning.
wlan1     Failed to read scan data : Network is down

myuser@adri1:~$ lsb_release -d
Description:    Ubuntu 12.10

myuser@adri1:~$ uname -mr
3.5.0-17-generic i686

Any help much appreciated... Thanks, Philippe

    Read the article

  • Elo system behaves oddly in program I've created

    - by adc
Alright, so I'm looking to build a small program (C# and XAML) that, essentially, does this:

1. Generate an array of players. Each player has a current rating and a true rating. I set current rating to 1200 as a starting point right now; I've also tried setting it to true rating and the average of the two. True rating is what their skill level actually is. The true rating is calculated based on percentages from the current League of Legends rating system; generating an array of 970 thousand produces results very similar to the data from here: (removed due to URL limit - but trust me, the results are very similar). This array is of a length specified by the user.
2. If need be, sort the array from smallest to largest.
3. Play X number of games, again specified by the user. This is done by taking the array of players (which is sorted by current rating after being created) and running through it in groups of 10. The first five are on team one, the second five are on team two. It then takes the true rating of these players and calculates an expected chance to win using the Elo system. It generates a random double and compares it to the expected chance to win; if the number is lower, team one wins - otherwise team two wins. I then update the rating of the players via, again, the Elo system - giving the winning team a score of 1 and the losing team a score of 0. I use a K value of 36 (but have tried 12, 24, and even higher) and an F value of 400.
4. After going through the entire array of players (which I have conveniently forced to be a multiple of ten), it sorts the array again by current rating.

This, if my understanding of the Elo system is correct, runs properly. However, it doesn't seem to work. I have a running test telling me how many players of the full array are within 100 current rating of their true rating. I would expect some portion of the population to be outside this range (as probability is not always going to go in their favor), but a full 40-45% of the population is outside of it. I also have it outputting the maximum difference between true and current rating - and I have never seen this drop below 500! It hovers between 550-600, occasionally going over or under. I'm at a loss as to what to change - I've fiddled with the K and F values, where I start all the players, etc., but nothing changes the fact that eventually a good 40% of the population is outside the range. And it isn't that I have it playing too few games - it has now run through over 60 thousand games and the problem never disappears or really fluctuates.

The full C# code, including everything except the XAML file and the Player class (pastebin is being very slow and I can only post two links, so I can't link to the XAML file): http://pastebin.com/rFcZRL84
The Player class: http://pastebin.com/4cJTdTRu

I guess my question is: did I do anything wrong? Is there a problem with the way I implemented the system, or is it just that Riot uses a significantly modified Elo system? I don't think it's the latter, as that still wouldn't explain the massive true and current rating differences.
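For reference, a minimal C# sketch of the standard Elo math the question describes, with K = 36 and F = 400 as stated; the class and member names are illustrative and not taken from the poster's code:

using System;

// Illustrative Elo helper: expected score from the F = 400 logistic curve,
// then a K = 36 rating update. Score is 1 for a win, 0 for a loss.
static class Elo
{
    const double K = 36.0;
    const double F = 400.0;

    // Expected chance that a side rated ratingA beats a side rated ratingB.
    public static double ExpectedScore(double ratingA, double ratingB)
    {
        return 1.0 / (1.0 + Math.Pow(10.0, (ratingB - ratingA) / F));
    }

    // New rating for one player after the game.
    public static double Update(double rating, double expected, double score)
    {
        return rating + K * (score - expected);
    }
}

Used per game, roughly as the question describes: compute the expected score from the two teams' average true ratings, roll a random double against it to decide the winner, then apply Elo.Update to each player's current rating with a score of 1 or 0.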

    Read the article

  • Can't connect to certain HTTPS sites

    - by mind.blank
I've just moved to a new apartment with an internet connection via a router, and I'm finding that I can't connect to quite a few sites that use SSL. For example, trying to connect to PayPal:

curl -v https://paypal.com
* About to connect() to paypal.com port 443 (#0)
*   Trying 66.211.169.3... connected
* successfully set certificate verify locations:
*   CAfile: none
    CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to paypal.com:443
* Closing connection #0
curl: (35) Unknown SSL protocol error in connection to paypal.com:443

curl -v -ssl https://paypal.com gives the same output.

For some sites it works:

curl -v https://www.google.com
* About to connect() to www.google.com port 443 (#0)
*   Trying 74.125.235.112... connected
* successfully set certificate verify locations:
*   CAfile: none
    CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using ECDHE-RSA-RC4-SHA
* Server certificate:
*   subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=www.google.com
*   start date: 2011-10-26 00:00:00 GMT
*   expire date: 2013-09-30 23:59:59 GMT
*   common name: www.google.com (matched)
*   issuer: C=ZA; O=Thawte Consulting (Pty) Ltd.; CN=Thawte SGC CA
* SSL certificate verify ok.
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: www.google.com
> Accept: */*
>
< HTTP/1.1 302 Found
< Location: https://www.google.co.jp/
. . .

I'm using Ubuntu 12.04, with Windows 7 installed as well. These sites work on Windows :( Not sure if this information helps, but I ran ifconfig and got the following:

eth0      Link encap:Ethernet  HWaddr 1c:c1:de:bc:e2:4f
          inet6 addr: 2408:c3:7fff:991:686b:8d18:81b3:8dd1/64 Scope:Global
          inet6 addr: 2408:c3:7fff:991:1ec1:deff:febc:e24f/64 Scope:Global
          inet6 addr: fe80::1ec1:deff:febc:e24f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:87075 errors:0 dropped:0 overruns:0 frame:0
          TX packets:54522 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:78167937 (78.1 MB)  TX bytes:10016891 (10.0 MB)
          Interrupt:46 Base address:0x4000

eth1      Link encap:Ethernet  HWaddr ac:81:12:0d:93:80
          inet6 addr: fe80::ae81:12ff:fe0d:9380/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:498
          TX packets:0 errors:26 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:17

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:630 errors:0 dropped:0 overruns:0 frame:0
          TX packets:630 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:39592 (39.5 KB)  TX bytes:39592 (39.5 KB)

ppp0      Link encap:Point-to-Point Protocol
          inet addr:180.57.228.200  P-t-P:118.23.8.175  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
          RX packets:39631 errors:0 dropped:0 overruns:0 frame:0
          TX packets:22391 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:3
          RX bytes:43462054 (43.4 MB)  TX bytes:2834628 (2.8 MB)

    Read the article

  • Opening a new window from ASP.NET code behind

    - by TATWORTH
At http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx there is an excellent post on how to open a new window from code behind. The purists may not like it, but it helped solve a problem for a client's client. Here is an update for VS2010 users:

using System;
using System.Web;
using System.Web.UI;

/// <summary>
/// Response helper for opening a popup window from code behind.
/// </summary>
public static class ResponseHelper
{
  /// <summary>
  /// Redirect to a popup window.
  /// </summary>
  /// <param name="response">The response.</param>
  /// <param name="url">URL to open to.</param>
  /// <param name="target">Target of window: _self or _blank.</param>
  /// <param name="windowFeatures">Features such as the window bar.</param>
  /// <remarks>
  ///   <list type="bullet">
  ///     <item>
  /// From http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx
  /// </item>
  /// <item>
  /// Note: If you use it outside the context of a Page request, you can't redirect to a new window. The reason is the need to call the ResolveClientUrl method on Page, which I can't do if there is no Page. I could have just built my own version of that method, but it's more involved than you might think to do it right. So if you need to use this from an HttpHandler other than a Page, you are on your own.
  /// </item>
  ///     <item>
  /// Beware of popup blockers.
  /// </item>
  /// <item>
  /// Note: Obviously when you are redirecting to a new window, the current window will still be hanging around. Normally redirects abort the current request -- no further processing occurs. But for these redirects, processing continues, since we still have to serve the response for the current window (which also happens to contain the script to open the new window, so it is important that it completes).
  /// </item>
  /// <item>
  /// Sample call: Response.Redirect("popup.aspx", "_blank", "menubar=0,width=100,height=100");
  /// </item>
  ///   </list>
  /// </remarks>
  public static void Redirect(this HttpResponse response, string url, string target, string windowFeatures)
  {
    if ((String.IsNullOrEmpty(target) || target.Equals("_self", StringComparison.OrdinalIgnoreCase)) && String.IsNullOrEmpty(windowFeatures))
    {
      response.Redirect(url);
    }
    else
    {
      // Soft cast so the null check below is meaningful when the handler is not a Page.
      Page page = HttpContext.Current.Handler as Page;
      if (page == null)
      {
        throw new InvalidOperationException("Cannot redirect to new window outside Page context.");
      }

      url = page.ResolveClientUrl(url);

      string script;
      if (!String.IsNullOrEmpty(windowFeatures))
      {
        script = @"window.open(""{0}"", ""{1}"", ""{2}"");";
      }
      else
      {
        script = @"window.open(""{0}"", ""{1}"");";
      }

      script = String.Format(script, url, target, windowFeatures);
      ScriptManager.RegisterStartupScript(page, typeof(Page), "Redirect", script, true);
    }
  }
}
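A quick usage sketch, following the sample call in the remarks above (the page name, control name and window features here are made-up placeholders): from a code-behind event handler you would call the extension method like this:

// Hypothetical code-behind usage of the ResponseHelper extension above.
protected void LookupButton_Click(object sender, EventArgs e)
{
    // Opens ReportPopup.aspx in a new window without a menu bar.
    Response.Redirect("ReportPopup.aspx", "_blank", "menubar=0,width=400,height=300");
}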

    Read the article

  • Graphical driver 13.10 ATI RV630

    - by Michael Cephalus
I started updating the distro from 13.04 to 13.10. Then I got my hands on a Radeon HD 2600 and installed the RV630-compatible Catalyst driver from the official web page. After that, the X server crashed every time I opened a browser or, for example, VLC. I noticed that no driver was listed in the configuration below.

michael@statubtunu:~$ lshw -c video
WARNING: you should run this program as super-user.
  *-display UNCLAIMED
       description: VGA compatible controller
       product: RV630 PRO [Radeon HD 2600 PRO]
       vendor: Advanced Micro Devices, Inc. [AMD/ATI]
       physical id: 0
       bus info: pci@0000:01:00.0
       version: 00
       width: 64 bits
       clock: 33MHz
       capabilities: vga_controller bus_master cap_list
       configuration: latency=0
       resources: memory:d0000000-dfffffff memory:e0500000-e050ffff ioport:1000(size=256) memory:e0000000-e001ffff

I installed additional drivers from Jockey and the ATI driver from the Ubuntu Software Center, though that only made the X server crash completely, and when I type:

michael@statubtunu:~$ sudo startx
X.Org X Server 1.14.3
Release Date: 2013-09-12
X Protocol Version 11, Revision 0
Build Operating System: Linux 3.2.0-37-generic i686 Ubuntu
Current Operating System: Linux statubtunu 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 17:26:33 UTC 2013 i686
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.11.0-13-generic root=UUID=8fb2e395-0ea2-4f45-ac66-225696b7ce2c ro quiet splash vt.handoff=7
Build Date: 15 October 2013 09:23:29AM
xorg-server 2:1.14.3-3ubuntu2 (For technical support please see http://www.ubuntu.com/support)
Current version of pixman: 0.30.2
Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 12 18:50:02 2013
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
Initializing built-in extension Generic Event Extension
Initializing built-in extension SHAPE
Initializing built-in extension MIT-SHM
Initializing built-in extension XInputExtension
Initializing built-in extension XTEST
Initializing built-in extension BIG-REQUESTS
Initializing built-in extension SYNC
Initializing built-in extension XKEYBOARD
Initializing built-in extension XC-MISC
Initializing built-in extension SECURITY
Initializing built-in extension XINERAMA
Initializing built-in extension XFIXES
Initializing built-in extension RENDER
Initializing built-in extension RANDR
Initializing built-in extension COMPOSITE
Initializing built-in extension DAMAGE
Initializing built-in extension MIT-SCREEN-SAVER
Initializing built-in extension DOUBLE-BUFFER
Initializing built-in extension RECORD
Initializing built-in extension DPMS
Initializing built-in extension X-Resource
Initializing built-in extension XVideo
Initializing built-in extension XVideo-MotionCompensation
Initializing built-in extension SELinux
Initializing built-in extension XFree86-VidModeExtension
Initializing built-in extension XFree86-DGA
Initializing built-in extension XFree86-DRI
Initializing built-in extension DRI2
Loading extension GLX
ERROR: could not insert 'fglrx': No such device
(II) [KMS] drm report modesetting isn't supported.
(EE)
(EE) Backtrace:
(EE) 0: /usr/bin/X (xorg_backtrace+0x49) [0xb77780b9]
(EE) 1: /usr/bin/X (0xb75d8000+0x1a3e24) [0xb777be24]
(EE) 2: (vdso) (__kernel_rt_sigreturn+0x0) [0xb75b540c]
(EE) 3: /usr/bin/X (xf86findOption+0x2a) [0xb7681daa]
(EE) 4: /usr/bin/X (xf86findOptionValue+0x23) [0xb7681f43]
(EE) 5: /usr/bin/X (0xb75d8000+0x7ebfd) [0xb7656bfd]
(EE) 6: /usr/bin/X (xf86ProcessOptions+0x37) [0xb7657507]
(EE) 7: /usr/lib/xorg/modules/libvbe.so (vbeDoEDID+0xe7) [0xb5eb8647]
(EE) 8: /usr/lib/xorg/modules/drivers/vesa_drv.so (0xb5ee7000+0x287c) [0xb5ee987c]
(EE) 9: /usr/bin/X (InitOutput+0xb23) [0xb7659c33]
(EE) 10: /usr/bin/X (0xb75d8000+0x2a30b) [0xb760230b]
(EE) 11: /lib/i386-linux-gnu/libc.so.6 (__libc_start_main+0xf5) [0xb71ba905]
(EE) 12: /usr/bin/X (0xb75d8000+0x2a908) [0xb7602908]
(EE)
(EE) Segmentation fault at address 0x5
(EE)
Fatal server error:
(EE) Caught signal 11 (Segmentation fault). Server aborting
(EE)
(EE) Please consult the The X.Org Foundation support at for help.
(EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
(EE)
(EE) Server terminated with error (1). Closing log file.

This is what comes up, but no GUI. Is there any way to deal with this?

    Read the article

  • Windows Azure SDK 1.3 addresses early adopter feedback

    - by Eric Nelson
At the end of November 2010 we released a new version of the Windows Azure SDK, which contains many new features driven by the great feedback of early adopters, plus a shiny new portal.

New portal implemented in Silverlight: The new portal is implemented using Silverlight and replaces the (IMHO rather clunky) original HTML + JavaScript portal. It is 100% better, although it does still have a few bugs. Enjoy! P.S. You can, if you wish, still use the old portal.

New runtime functionality: The following functionality is now generally available through the Windows Azure SDK and Windows Azure Tools for Visual Studio and the new Windows Azure Management Portal:

Elevated Privileges and Full IIS. You can now run a portion or all of your code in Web and Worker roles with elevated administrator privileges. The Web role now provides full IIS functionality, which enables multiple IIS sites per Web role and the ability to install IIS modules.

Remote Desktop functionality enables you to connect to a running instance of your application or service in order to monitor activity and troubleshoot common problems.

Windows Server 2008 R2 Roles: Windows Azure now supports Windows Server 2008 R2 in its Web, Worker and VM roles. This new support enables you to take advantage of the full range of Windows Server 2008 R2 features such as IIS 7.5, AppLocker, and enhanced command-line and automated management using PowerShell Version 2.0.

New runtime functionality - in beta:

Windows Azure Virtual Machine Role: Support for more types of new and existing Windows applications will soon be available with the introduction of the Virtual Machine (VM) role. You can move more existing applications to Windows Azure, reducing the need to make costly code or deployment changes.

Extra Small Windows Azure Instance, which is priced at $0.05 per compute hour, provides developers with a cost-effective training and development environment. Developers can also use the Extra Small instance to prototype cloud solutions at a lower cost.

Windows Azure Connect (formerly Project Sydney), which enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources, is the first Windows Azure Virtual Network feature that we're making available as a CTP. You can sign up for any of the betas via the Windows Azure Management Portal.

Improved processes and simplified operations:

New portal! (see above)

Access to new diagnostic information, including the ability to click on a role to see role type, deployment time and last reboot time.

A new sign-up process that dramatically reduces the number of steps needed to sign up for Windows Azure.

New scenario-based Windows Azure Platform forums to help answer questions and share knowledge more efficiently.

Multiple Service Administrators: Windows Azure now supports multiple Windows Live IDs with administrator privileges on the same Windows Azure account. The objective is to make it easy for a team to work on the same Windows Azure account while using their individual Windows Live IDs.

Related Links:
Please also let us know through Microsoft Platform Ready if and when you intend to build an application using the Windows Azure Platform. Or indeed if you already have (well done). You will get access to some great benefits if you do (more on that in a future post). It also really helps us better understand the demand out there, which directly impacts how we will plan the next six months of activities around the Windows Azure Platform.
Visit Microsoft Platform Ready to tell us about your plans for your applications.
UK based? Interested in the Windows Azure Platform? Join http://ukazure.ning.com
Get started with the Windows Azure Platform: http://bit.ly/startazure

    Read the article

  • SD card bigger than 2 GB is not recognized in Ubuntu 12.04

    - by dex1
When I insert a card of up to 2 GB it is immediately seen by the system, but if I try a bigger one it is not seen. I presume the issue is not due to the card reader itself, as it reads all cards under Windows 7, but due to the Linux driver. I have seen some people with similar issues but no solution. Any help appreciated. GParted doesn't see cards bigger than 2 GB.

After inserting the small card:

ubuntu@ubuntu:~$ dmesg
[10169.384481] mmc0: new SD card at address a95c
[10169.384870] mmcblk0: mmc0:a95c SD016 14.0 MiB
[10169.386715] mmcblk0: p1

Everything worked fine. Then I removed the small one, put in the 8 GB card and waited for two minutes:

[10295.736422] mmc0: card a95c removed
[10362.448383] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
[10372.480076] mmc0: Timeout waiting for hardware interrupt.
[10382.496146] mmc0: Timeout waiting for hardware interrupt.
[10392.512149] mmc0: Timeout waiting for hardware interrupt.
[10402.528145] mmc0: Timeout waiting for hardware interrupt.
[10402.529267] mmc0: error -110 whilst initialising SD card
[10402.748807] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
[10412.768063] mmc0: Timeout waiting for hardware interrupt.
[10422.784051] mmc0: Timeout waiting for hardware interrupt.
[10432.800076] mmc0: Timeout waiting for hardware interrupt.
[10442.816067] mmc0: Timeout waiting for hardware interrupt.
[10442.817165] mmc0: error -110 whilst initialising SD card
[10443.040805] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
[10453.056145] mmc0: Timeout waiting for hardware interrupt.
[10463.072139] mmc0: Timeout waiting for hardware interrupt.
[10473.088050] mmc0: Timeout waiting for hardware interrupt.
[10483.104046] mmc0: Timeout waiting for hardware interrupt.
[10483.104107] mmc0: error -110 whilst initialising SD card
[10483.328960] sdhci: Switching to 1.8V signalling voltage failed, retrying with S18R set to 0
[10493.344144] mmc0: Timeout waiting for hardware interrupt.

ubuntu@ubuntu:~$ lspci
00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 03)
00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 03)
00:02.1 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (secondary) (rev 03)
00:1a.0 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 03)
00:1a.1 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 03)
00:1a.7 USB controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 03)
00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03)
00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 03)
00:1c.3 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 4 (rev 03)
00:1c.4 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 5 (rev 03)
00:1c.5 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 6 (rev 03)
00:1d.0 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 03)
00:1d.1 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 03)
00:1d.2 USB controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 03)
00:1d.7 USB controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f3)
00:1f.0 ISA bridge: Intel Corporation 82801HM (ICH8M) LPC Interface Controller (rev 03)
00:1f.1 IDE interface: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03)
00:1f.2 SATA controller: Intel Corporation 82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode] (rev 03)
00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 03)
07:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8072 PCI-E Gigabit Ethernet Controller (rev 16)
0a:01.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
0a:01.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 02)
0a:01.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01)

The same cards in the same machine (same reader), only under a different OS (Windows 7), work flawlessly. Some interesting reading I came across, though it goes over my head: http://www.mail-archive.com/[email protected]/msg14598.html and another bit: http://article.gmane.org/gmane.linux.kernel.mmc/11973/match=sd+card+not+recognized

    Read the article

< Previous Page | 937 938 939 940 941 942 943 944 945 946 947 948  | Next Page >