Search Results

Search found 901 results on 37 pages for 'greg bala'.


  • It's not just “Single Sign-on” by Steve Knott (aurionPro SENA)

    - by Greg Jensen
    It is true that Oracle Enterprise Single Sign-on (Oracle ESSO) started out as purely an application single sign-on tool, but as we have seen in the previous articles in this series, the product has matured into a suite of tools that can do more than just automated single sign-on and can also provide a rapidly deployed, cost-effective solution to many demanding password management problems. In the last article of this series I would like to discuss three cases where customers faced password scenarios that required more than just single sign-on, and how some of the less well known tools in the Oracle ESSO suite "kitbag" helped solve these challenges.

    Case #1
    One of the issues often faced by our customers is how to keep their applications compliant. I had a client who liked the idea of automated single sign-on for most of his applications but had a key requirement to actually increase the security for one specific SOX application. For the SOX application he wanted to secure access by using two-factor authentication with a smartcard. The problem was that the application did not support two-factor authentication. The solution was to use a feature from the Oracle ESSO suite called Authentication Manager. This feature enables you to have multiple authentication methods for the same user, which in this case were a smartcard and the Windows password. Within Authentication Manager each authenticator can be configured with a security grade, so we gave the smartcard a high grade and the Windows password a normal grade. Security grading in Oracle ESSO can be configured on a per-application basis, so we set the SOX application to require the higher grade smartcard authenticator. The end result was that the user enjoyed automated single sign-on for most of the applications apart from the SOX application. When the SOX application was launched, the user was required by ESSO to present their smartcard before being given access to the application.

    Case #2
    Another example of solving compliance issues was the case of a large energy company that had a number of core billing applications. New regulations required that users change their password regularly and use a complex password. The problem facing the customer was that the core billing applications did not have any native user password change functionality. The customer could not replace the core applications because of the cost and time required to re-develop them. With a reputation for innovation, aurionPro SENA were approached to provide a solution to this problem using Oracle ESSO. Oracle ESSO has a password expiry feature that can be triggered periodically based on the timestamp of the user's last password creation, so our strategy was to leverage this feature to provide the password change experience. The trigger can launch an application change password event; however, in this scenario there was no native change password feature that could be launched, so a "dummy" change password screen was created that could imitate the missing change password function and connect to the application database on behalf of the user. Oracle ESSO was configured to trigger a change password event every 60 days. After this period, if the user launched the application, Oracle ESSO would detect the logon screen and invoke the password expiry feature. Oracle ESSO would trigger the "dummy" screen, detect it automatically as the application change password screen and insert a complex password on behalf of the user. After the password event had completed, the user was logged on to the application with their new password. All this was provided at a fraction of the cost of re-developing the core applications.

    Case #3
    Recent popular initiatives such as BYOD and working-from-home schemes bring with them many challenges in administering "unmanaged machines" and sometimes "unmanageable users." In a recent case, a client had a dispersed community of casual contractors who worked for the business using their own laptops to access applications. To improve security around password management, the goal was to provision the passwords directly to these contractors. In a previous article we saw how Oracle ESSO can provision passwords through Provisioning Gateway, but the challenge in this scenario was how to get the Oracle ESSO agent to the casual contractor on an unmanaged machine. The answer was to use another tool in the suite, Oracle ESSO Anywhere. This component can compile the normal Oracle ESSO functionality into a deployment package that can be made available from a website, in a similar way to a streamed application. The ESSO Anywhere agent does not actually install into the registry or program files but runs in a folder within the user's profile, so no local administrator rights are required for installation. The ESSO Anywhere package can also be configured to stay persistent or disable itself at the end of the user's session. In this case the user just needed to be told where the website package was located and download the package. Once the download was complete, the agent started automatically and the user was provided with single sign-on to their applications without ever knowing the application passwords.

    Finally, as we have seen in this series, Oracle ESSO not only has great utilities in its own tool box but also has direct integration with Oracle Privileged Account Manager, Oracle Identity Manager and Oracle Access Manager. Integrated with these tools, it provides a complete and complementary platform to address even the most complex identity and access management requirements. So what next for Oracle ESSO? "Agentless ESSO available in the cloud" – but that will be a subject for a future Oracle ESSO series!

    Read the article

  • Oracle Enterprise Pack for Eclipse (OEPE) 11.1.1.7 adds Oracle ADF Tooling Support

    - by greg.stachnick
    Oracle Enterprise Pack for Eclipse (OEPE) 11.1.1.7 is now available and includes first-time support for Oracle ADF development in Eclipse. Installers for OEPE 11.1.1.7 as well as Eclipse Update instructions can be found on the OEPE downloads page. Here is an overview of the new features of OEPE 11.1.1.7:

    Support for Oracle ADF Faces
    Oracle Enterprise Pack for Eclipse (OEPE) 11.1.1.7 now provides support for development with Oracle ADF 11.1.1.4. These features focus on enablement and configuration of the ADF Runtime with Eclipse and WebLogic Server 10.3.4, as well as design time tools for ADF Faces.
    - A new OEPE 11.1.1.7 installer bundles WebLogic Server 10.3.4, Coherence 3.6, and Oracle ADF 11.1.1.4.
    - New Server Extensions allow you to download and install the ADF Runtime libraries into an existing WebLogic Server from within Eclipse.
    - New Project Templates and Facets are available for ADF Faces development (ADF Web).
    - New ADF validators with QuickFix options will check common descriptors for the appropriate ADF configurations.
    - ADF-enabled JSP templates supporting multiple layouts are available under the New menu.
    - New remote and local run/deploy support for ADF applications to WebLogic Server 10.3.4.
    - The Palette now supports drag and drop of ADF Faces and Data Visualization Tools (DVT) tags and includes editors for each tag configuration.
    - The Eclipse Property Sheet has been enhanced to provide advanced ADF tag configuration.
    - The AppXRay dependency engine provides improved validation, code completion, and hyperlink navigation for ADF Faces and DVT tags.
    - The Eclipse Web Page Editor enables a more productive source editing experience for ADF Faces.

    UI Consolidation for WebLogic Server Tools
    Oracle Enterprise Pack for Eclipse 11.1.1.7 includes a more streamlined UI for WebLogic Server development. You can now view deployments within the Servers view to understand which modules have been deployed to the domain. The MBean Browser View has been merged with the Servers view, enabling easier access to MBean values while still allowing drag and drop to WLST scripts. WebLogic Server configuration options have been moved to the Properties window; right-click a server configuration and select Properties.

    Read the article

  • How to bill a client for frequently-interrupted time

    - by Greg
    I find that when I'm working on hourly-billable projects (in particular, those that are research/design/architecture-oriented as opposed to straight coding) I'm easily distracted by any number of things (email, grabbing a drink (loss of focus, but nature happens), a link off the webpage I was reading, a wandering mind (easy when the job calls for a lot of thinking), etc.). This results in very fragmented time, far too incremental IMO to accurately track with a timeclock, and some of it very gray. I frequently end up billing for only some fraction of the elapsed time I spent in order to feel fair, but sometimes it takes a really long time to put in an 8-hour day.

    By contrast, when I've worked for salary I've not worried about whether I'm actively working at any given minute, I just get the job done, and I've never had anything but stellar reviews/feedback from past salaried employers, so I think I get the job done well. I personally believe in an 80/20 cycle: I get 80% of my work done during an inspired 20% of my time. But I have to screw around the other 80% of the time in order to get that first 20%.

    So the question: what billing/time-tracking policy can I adopt in order to be fair to my hourly customers without having to write off my own less-productive 80% that a salaried employer is willing to overlook in light of the complete package?

    Note: This question is not about how to be more productive or focused. It's about how to work around whatever salient limitations I have in a way that's fair both to me and to my customers.

    Update: A little clarification (to pre-emptively stop some righteous indignation): I currently have a half dozen different project/client groups. It's not a great situation and I'm working at reducing it down to two, but that's my current reality. It's very easy to get off on a thread related to a different project than the one I'm clocking, and I'm not always conscious of it at the time. [I did not intend the question to mean that I was off playing games or making personal calls, etc., and have adjusted wording above to be clearer. Most of the time. I am only human, and sometimes the mind does force you to take a break! :-)]

    Read the article

  • Gartner PCC Summit, Baltimore - Oracle's Take

    - by [email protected]
    Back from last week's trip to the Gartner PCC Summit in Baltimore, Andy MacMillan and Ajay Gandhi share their impressions of the conference. According to Andy and Ajay:
    - Interest in the sector is increasing - attendance at this year's conference was up by more than 50 percent.
    - The discussion at the conference this year shifted from a focus on what the tools are to how the tools can transform organizations and help build businesses.
    - Conference attendees were interested in taking a platform approach, looking to bring multiple tools together to solve problems and simplify business processes.

    If you are interested in learning more about the Bureau of Indian Affairs' deployment showcased in Ajay's session at the Gartner PCC Summit, come back soon - a detailed post is on its way.

    Read the article

  • How do I enable WebGL sites with Firefox 4 Beta?

    - by Greg Grossmeier
    I am using Firefox 4.0b12 from the Mozilla Team's "Firefox Next" PPA. My about:config has webgl.enabled_for_all_sites set to true, as most guides recommend; they also say that is all that is needed. It doesn't work for me. The main Mozilla Demos page ("Web O' Wonder") gives me:

    Unfortunately, while your browser supports WebGL, your video drivers may be too old. To view any of the demos tagged with WebGL, try updating your drivers at NVIDIA, AMD, and Intel. You can still watch screencasts of the WebGL demos or fully experience our other non-WebGL demos without updating.

    This Mozilla demo gives me "No WebGL context found."
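    For what it is worth, the "No WebGL context found" message generally just means that a context request like the one below returns null. This snippet is purely illustrative (paste it into the Error Console or a test page); experimental-webgl is the context name the Firefox 4 betas exposed at the time:

        var canvas = document.createElement("canvas");
        // try the beta-era name first, then the standard one
        var gl = canvas.getContext("experimental-webgl") || canvas.getContext("webgl");
        alert(gl ? "Got a WebGL context" : "No WebGL context available");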

    Read the article

  • OT: Airlines and Podcasts

    - by Greg Low
    Those that know me know that I spend an inordinate amount of time on airlines. I also love podcasts, as you can tell from my www.sqldownunder.com site and show. So anything that combines the two is just awesome. Fly With Joe fits that perfectly. Joe D'Eon provides great insights in his show. I was sad last year that he hadn't posted many shows. I've also been quiet for a couple of months (but that's about to change with a bunch of SQL Server 2008 R2 shows). But I've been so pleased that Joe's got...(read more)

    Read the article

  • Standard way of allowing general XML data

    - by Greg Jackson
    I'm writing a data gathering and reporting application that takes XML files as input, which will then be read, processed, and stored in a strongly-typed database. For example, an XML file for a "Job" might look like this:

        <Data type="Job">
          <ID>12345</ID>
          <JobName>MyJob</JobName>
          <StartDate>04/07/2012 10:45:00 AM</StartDate>
          <Files>
            <File name="a.jpeg" path="images\" />
            <File name="b.mp3" path="music\mp3\" />
          </Files>
        </Data>

    I'd like to use a schema to have a standard format for these input files (depending on what type of data is being used, for example "Job", "User", "View"), but I'd also like to not fail validation if there is extra data provided. For example, perhaps a Job has additional properties such as "IsAutomated", "Requester", "EndDate", and so on. I don't particularly care about these extra properties. If they are included in the XML, I'll simply ignore them when I'm processing the XML file, and I'd like validation to do the same, without having to include in the schema every single possible property that a customer might provide me with. Is there a standard way of providing such a schema, or of allowing such a general XML file that can still be validated without resorting to something as naïve (and potentially difficult to deal with) as the below?

        <Data type="Job">
          <Data name="ID">12345</Data>
          . . .
          <Data name="Files">
            <Data name="File">
              <Data name="Filename">a.jpeg</Data>
              <Data name="path">images</Data>
              . . .
            </Data>
          </Data>
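    One commonly suggested approach is to declare only the elements you care about and then allow anything else with an xs:any wildcard set to processContents="lax" (and xs:anyAttribute for unknown attributes). The sketch below is illustrative only: FilesType stands in for a separately defined type for Files, the element types are deliberately loose, and depending on your validator you may need to adjust the content model to satisfy the Unique Particle Attribution rule:

        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="Data">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="ID" type="xs:string"/>
                <xs:element name="JobName" type="xs:string"/>
                <xs:element name="StartDate" type="xs:string"/>
                <xs:element name="Files" type="FilesType"/>
                <!-- tolerate extra elements the customer adds, without validating them -->
                <xs:any processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
              </xs:sequence>
              <xs:attribute name="type" type="xs:string"/>
              <!-- likewise for unknown attributes -->
              <xs:anyAttribute processContents="lax"/>
            </xs:complexType>
          </xs:element>
        </xs:schema>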

    Read the article

  • SolidQ Journal for January (free and available now)

    - by Greg Low
    I've been travelling in Tasmania for a week or so and didn't get to post about the SolidQ Journal for January. It's our free monthly journal at: http://www.solidq.com/sqj . I promised to write a part two on controlling the security context of stored procedures but didn't get time to write this month. I'll rectify that very soon. However, in the meantime, the rest of the team have done a great job again. Guillermo Bas has described how to access SharePoint 2010 data through Windows Phone 7. Marino...(read more)

    Read the article

  • A deque based on binary trees

    - by Greg Ros
    This is a simple immutable deque based on binary trees. What do you think about it? Does this kind of data structure, or possibly an improvement thereof, seem useful? How could I improve it, preferably without getting rid of its strengths? (Not in the sense of more operations, in the sense of different design) Does this sort of thing have a name? Red nodes are newly instantiated; blue ones are reused. Nodes aren't actually red or anything, it's just for emphasis.

    Read the article

  • SQL Down Under podcast 60 with SQL Server MVP Adam Machanic

    - by Greg Low
    I managed to get another podcast posted over the weekend. Late last week, I managed to get a show recorded with Adam Machanic. Adam's always fascinating. In this show, he's talking about what he's found regarding increasing query performance using parallelism. Late in the show, he gives his thoughts on a number of topics related to the upcoming SQL Server 2014.Enjoy!The show is online now: http://www.sqldownunder.com/Podcasts 

    Read the article

  • SQL Server Data Tools–BI for Visual Studio 2013 Re-released

    - by Greg Low
    Customers used to complain that the tooling for creating BI projects (Analysis Services MD and Tabular, Reporting Services, and Integration Services) was based on earlier versions of Visual Studio than the ones they were using for their other work in Visual Studio (such as C#, VB, and ASP.NET projects). To alleviate that problem, the shipment of those tools has been decoupled from the shipment of the SQL Server product. In SQL Server 2014, the BI tooling isn't even included in the released version of SQL Server. This allows the team to keep up to date with the releases of Visual Studio. A little while back, I was really pleased to see that the Visual Studio 2013 update for SSDT-BI (SQL Server Data Tools for Business Intelligence) had been released. Unfortunately, it then had to be withdrawn. The good news is that it's back and you can get the latest version from here: http://www.microsoft.com/en-us/download/details.aspx?id=42313

    Read the article

  • Protecting PDF files and XDO.CFG

    - by Greg Kelly
    Security-related properties can be overridden at runtime through PeopleCode, as with all other XMLP properties, using the SetRuntimeProperties() method on the ReportDefn class. This is documented in PeopleBooks. Basically, this method needs to be called right before calling the processReport() method:

        . . .
        /* build parallel arrays of property names and values */
        &asPropName = CreateArrayRept("", 0);
        &asPropValue = CreateArrayRept("", 0);
        &asPropName.Push("pdf-open-password");
        &asPropValue.Push("test");
        /* apply the overrides, then run the report */
        &oRptDefn.SetRuntimeProperties(&asPropName, &asPropValue);
        &oRptDefn.ProcessReport(&sTemplateId, %Language_User, &dAsOfDate, &sOutputFormat);

    Of course, users should not hardcode the password value in the code; instead, if the password is stored encrypted in the database or somewhere else, they can use the Decrypt() API.

    Read the article

  • Taking the training wheels off: Accelerating the Business with Oracle IAM by Brian Mozinski (Accenture)

    - by Greg Jensen
    Today, technical requirements for IAM are evolving rapidly, and the bar is continuously raised for high-performance IAM solutions as organizations look to roll out high-volume use cases on the back of legacy systems. Existing solutions were often designed and architected to support offline transactions and manual processes, and business owners today demand globally scalable infrastructure to support the growth their business cases are expected to deliver. To help IAM practitioners address these challenges and make their organizations and themselves more successful, in this series we will outline:

    • Taking the training wheels off: Accelerating the Business with Oracle IAM. The explosive growth in expectations for IAM infrastructure, and the business cases they support to gain investment in new security programs.
    • "Necessity is the mother of invention": Technical solutions developed in the field. Well-proven tricks of the trade, used by IAM gurus to maximize your solution while addressing the requirements of global organizations.
    • The Art & Science of Performance Tuning of Oracle IAM 11gR2. Real-world examples of performance tuning with Oracle IAM.
    • Nowhere to go but up: Extending the benefits of accelerated IAM. Anything is possible; compelling new solutions organizations are unlocking with accelerated Oracle IAM.

    Let's get started … by talking about the changing dynamics driving these discussions.

    Big companies are getting bigger every day, and increasingly organizations operate across state lines, multiple time zones, and in many countries or continents at the same time. No longer is midnight to 6am a safe time to take down the system for upgrades, to run recons and import or update user accounts and attributes. Further, IT organizations are operating as shared services, with SLAs similar to the telephone-carrier levels expected by their "clients". Workers are moved in and out of roles at a weekly, daily, or even hourly rate, and IAM is expected to support those rapid changes. End users registering for services during business hours in Singapore expect their access to be green-lighted in custom apps hosted in Portugal within the hour. Many of the expectations of asynchronous systems and batched updates are no longer adequate, and the number and types of users are growing.

    When organizations acted more like independent teams at functional or geographic levels, it was manageable to have processes that relied on a handful of people who knew how to make things work … knew how to get you access to the key systems to get your job done. Today everyone is expected to do more with less: the finance administrator previously supporting their local Atlanta sales office might now be asked to help close the books for the Johannesburg team, and the access certification process once completed monthly by Joan on the 3rd floor is now done by a shared pool of resources in Sao Paulo. Fragmented processes that rely on institutional knowledge to get access to systems and get work done quickly break down in these scenarios. Highly robust processes that have automated workflows for connected or disconnected systems give organizations the dynamic flexibility to share work across these lines and cut costs or increase productivity. As the IT industry's computing paradigms continue to change with the passing of time, and as mature or proven approaches become clear, it is normal for organizations to adjust accordingly.

    Businesses must manage identity in an increasingly hybrid world in which legacy on-premises IAM infrastructures are extended or replaced to support more and more interconnected and interdependent services for a wider range of users. The old legacy IAM implementation models we had relied on to manage identities no longer apply. End users expect to self-request access to services from their tablet, get supervisor approval over mobile devices and email, and launch the application even if it is hosted on the cloud, or run by a partner, vendor, or service provider. While user expectations are higher, they are also simpler … logging into custom desktop apps to request approvals, or going through email or paper-based processes for certification, is unacceptable. Users expect security to operate within the paradigm of the application … i.e. to feel like the application they are using.

    Citizen and customer-facing applications have evolved from everywhere, with custom applications, 3rd party tools, and applications merged in from acquired entities or 3rd party OEMs resold to expand your portfolio of services. These all have their own user stores, authentication models, user lifecycles, session management, etc. Often the designers/developers are no longer accessible and the documentation is limited. Bringing together underlying directories to scale for growth, and to improve user experience, is critical for revenue … but also for operations.

    Job functions are more dynamic … take the Olympics for example. Endless organizations, from corporations broadcasting, endorsing, or marketing through the event, to non-profit athletic foundations and public/government entities for athletes and public safety, all operate simultaneously on the world stage. Each organization needs to spin up short-term teams, often dealing with proprietary information from hot ads to racing strategies or security plans. IAM is expected to enable teams to spin up, enable new applications, protect privacy, and secure critical infrastructure. Then it needs to be disabled just as quickly as users go back to their previous responsibilities.

    On a more technical level, optimized system directory tuning guidelines and parameters are needed by businesses today. Businesses need to make the right choices (for example, virtual directories) by choosing the correct architectural patterns (virtual, direct, replicated, and tuned); the challenge is that businesses need to assess and choose the correct architectural patterns (centralized, virtualized, and distributed).

    Today's business organizations have very complex, heterogeneous enterprises that contain diverse and multifaceted information. With today's ever-changing global landscape, the strategic end goal for business in challenging times is business agility. The business of identity management requires enterprises to be more agile and more responsive than ever before. The continued proliferation of networked devices (PCs, tablets, PDAs, notebooks, etc.) has caused the number of devices, and of users granted access to them, to grow exponentially. Businesses need to deploy an IAM system that can account for the demands for authentication and authorization on these devices. Increased innovation is forcing businesses and organizations to centralize their identity management services.

    Access management needs to handle traditional web-based access as well as new innovations around mobile, and to address insufficient governance processes, which can lead to rogue identity accounts that become a source of vulnerabilities within a business's identity platform. Risk-based decisions are providing challenges for business: an adaptive risk model has to make proper access decisions via standard web single sign-on for internal and external customers. Organizations have to move beyond simple logins and passwords to address trusted relationship questions such as: Is this a trusted customer, client, or citizen? Is this a trusted employee, vendor, or partner? Is this a trusted device? Without a solid technological foundation, organizational performance, collaboration, constituent services, and other organizational processes will languish.

    A single server location presents not only network concerns for a distributed user base, but identity challenges. The network risks are centered on the latency of the long trip that the traffic has to take. Other risks are around performance and availability: if the single identity server is lost, all access is lost.

    As you can see, there are many reasons why performance tuning IAM will have a substantial impact on the success of your organization. In our next installment in the series we roll up our sleeves and get into detailed tuning techniques used every day by thought leaders in the field implementing Oracle Identity & Access Management Solutions.

    Read the article

  • Use a partial in a partial?

    - by Greg Wallace
    I'm a Rails newbie, so bear with me. I have a few places, some pages, some partials that use: <%= link_to "delete", post, method: :delete, data: { confirm: "You sure?" }, title: post.content %> Would it make sense to make this a partial since it is used repeatedly, sometimes in other partials too? Is it o.k. to put partials in partials?
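    For illustration, one common way this is factored (the file names below are hypothetical, not from the original question) is to move the link into its own partial and render it with an explicit local, which works the same whether the caller is a view or another partial:

        <%# app/views/posts/_delete_link.html.erb (hypothetical partial name) %>
        <%= link_to "delete", post, method: :delete,
                    data: { confirm: "You sure?" }, title: post.content %>

        <%# any view or partial, e.g. app/views/posts/_post.html.erb, can then render it %>
        <%= render partial: "posts/delete_link", locals: { post: post } %>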

    Read the article

  • SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition

    - by Greg Mally
    Overview

    Many web service developers use soapUI for various tests like smoke tests, unit tests, and load testing, because you can get a free edition that is fairly robust. However, if you need to venture into more complex testing that requires a dynamic payload, then the free edition doesn't necessarily make it easy. This feature does exist in soapUI, but for obvious reasons it is in the Pro version. In this blog I will show you how to use soapUI free edition for dynamic payloads in a simplified example. Hopefully this will open the doors for you to expand into more complex scenarios. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project etc. For the basics, please review the documentation for soapUI: http://www.soapui.org/Getting-Started/. Additionally, we will be using asynchronous web services; you can review the setup for this in my blog: SOA Suite 11g Asynchronous Testing with soapUI.

    Features in soapUI Free Edition Relating to this Topic

    The soapUI test tool provides a very feature-rich environment that can do many things, provided you are willing to go beyond point and click. For this example, we will be leveraging just a couple of features for our dynamic payload example: Test Case Properties and Scripting with Groovy. Basically, we will be using a property as a global variable and we will manipulate that property using a Groovy script.

    Setting Up Our Property

    Properties are available throughout soapUI and here is a snippet from the soapUI website defining the locations:
    - Projects : for handling Project scope values, for example a subscription ID
    - TestSuite : for handling TestSuite scoped values, can be seen as "arguments" to a TestSuite
    - TestCases : for handling TestCase scoped values, can be seen as "arguments" to a TestCase
    - Properties TestStep : for providing local values/state within a TestCase
    - Local TestStep properties : several TestStep types maintain their own list of properties specific to their functionality : DataSource, DataSink, Run TestCase
    - MockServices : for handling MockService scoped values/arguments
    - MockResponses : for handling MockResponse scoped values
    - Global Properties : for handling Global properties, optionally from an external source

    For our example, we will be defining a custom property in a TestCase called SimpleAsyncPayload. The property can be created in either the Custom Properties tab located at the bottom of the Navigator panel when the TestCase is selected in the Navigator, or the Properties label in the TestCase editor:

    Navigator Panel
    TestCase Editor

    You will notice that I set a value of "0" for the custom property. For this simplified example, we will need to retrieve that value and manipulate it prior to making the web service request invocation. In order to accomplish this, we will need to get Groovy ;)

    Let's Get Groovy

    We will now add a new Groovy Script step to the TestCase called Manipulate Payload: TestCase Editor > Append Step > Groovy Script. Once we have added the Groovy Script step to our TestCase, we can open the Groovy Script editor to add the code to:
    1. Get the current value of the property we created called SimpleAsyncPayload.
    2. Convert the value of the property to an integer.
    3. Increment the value.
    4. Store the incremented value back into the TestCase property called SimpleAsyncPayload.
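    Roughly, and assuming the standard testRunner and log variables that soapUI injects into Groovy Script steps, the body of that script amounts to something like this:

        // read the current value of the TestCase property
        def current = testRunner.testCase.getPropertyValue("SimpleAsyncPayload")

        // convert it to an integer and increment it
        def next = current.toInteger() + 1

        // store the incremented value back into the TestCase property
        testRunner.testCase.setPropertyValue("SimpleAsyncPayload", next.toString())

        log.info "SimpleAsyncPayload is now ${next}"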
    The script should look something like the following:

    Groovy Script Editor – Manipulate Payload

    At this point we can test the script to see if it is working by simply running the TestCase (left-click the green triangle in the upper left-hand corner of the TestCase editor). To verify that it ran correctly, we can look at the value of the SimpleAsyncPayload property, which should now be 1:

    TestCase Editor – Run Results

    All that is left to complete the TestCase is to append another step of type Test Request. The information required to append the request is a name and an operation to invoke. In this example we will use the default name and select SimpleAsyncBPELProcessBinding -> process as the operation (for any other information being requested, simply use the defaults; if you are calling an asynchronous operation, do not add any assertions). We are now on familiar ground with the Test Request editor. Depending upon the type of operation you are invoking (synchronous or asynchronous), please update the request with the necessary information (e.g., callback information for asynchronous operations).

    We will now tweak the Test Request payload to retrieve the value of the SimpleAsyncPayload property. The soapUI editor makes this very simple: right-click in the payload and navigate to the property (e.g., right-click > Get Data.. > TestCase: [Groovy TestCase] > Property [SimpleAsyncPayload]):

    Test Request Editor – Insert Property Value

    Your payload should now look something like the following:

    Test Request Editor – Inserted Property Value

    Just like before, we are now ready to run the TestCase. If everything goes as expected we should see a response like the following:

    Message Viewer – Results of TestCase Run

    We are now set up to run a stress test where the payload will change for each request. This simple example can be expanded to include multiple payload values, complex calculations in the scripts, or whatever else can be done via soapUI scripting. Hopefully you have found this useful, and happy testing to you :)

    Read the article

  • WPA2 authentication fails using USB network devices (Linksys and Rosewill)

    - by Greg Youtz
    Decided to reduce the clutter in the house and replace a wired connection with a wireless one on my wife's system using USB network device Rosewill RNX-X1. I can see and connect to unprotected network, but WPA2 authentication repeatedly fails. Tried the same with a Linksys USB network adapter. Both failed to authenticate. Worth noting that I recently switched from Comcast to CenturyLink and so switched routers. The system connected successfully to previous router (Linksys EA4500) using WPA2. Would think it is the router (Actiontec C1000A) but all other devices (TV, iPad, Windows, Blackberry, and Squeezebox) connect ok. Would appreciate some diagnostic guidance and insight (phrased for a newbie!) Tests to date: sudo lshw -class network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 01 serial: 00:e0:4d:30:40:a1 size: 10Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=N/A latency=0 link=no multicast=yes port=MII speed=10Mbit/s resources: irq:47 ioport:ac00(size=256) memory:fdcff000-fdcfffff memory:fdb00000-fdb1ffff *-network description: Wireless interface physical id: 1 bus info: usb@1:2 logical name: wlan1 serial: 00:02:6f:bd:30:a0 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2800usb driverversion=3.2.0-31-generic firmware=0.29 link=no multicast=yes wireless=IEEE 802.11bgn sudo lspci -v 00:00.0 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 Capabilities: [44] HyperTransport: Slave or Primary Interface Capabilities: [dc] HyperTransport: MSI Mapping Enable+ Fixed- 00:01.0 ISA bridge: NVIDIA Corporation MCP67 ISA Bridge (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 00:01.1 SMBus: NVIDIA Corporation MCP67 SMBus (rev a2) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: 66MHz, fast devsel, IRQ 11 I/O ports at fc00 [size=64] I/O ports at 1c00 [size=64] I/O ports at 1c40 [size=64] Capabilities: [44] Power Management version 2 Kernel driver in use: nForce2_smbus Kernel modules: i2c-nforce2 00:01.2 RAM memory: NVIDIA Corporation MCP67 Memory Controller (rev a2) Flags: 66MHz, fast devsel 00:02.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 Memory at fe02f000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:02.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe02e000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:04.0 USB controller: NVIDIA Corporation MCP67 OHCI USB 1.1 Controller (rev a2) (prog-if 10 [OHCI]) Subsystem: Biostar Microtech Int'l Corp Device 
3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21 Memory at fe02d000 (32-bit, non-prefetchable) [size=4K] Capabilities: [44] Power Management version 2 Kernel driver in use: ohci_hcd 00:04.1 USB controller: NVIDIA Corporation MCP67 EHCI USB 2.0 Controller (rev a2) (prog-if 20 [EHCI]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 20 Memory at fe02c000 (32-bit, non-prefetchable) [size=256] Capabilities: [44] Debug port: BAR=1 offset=0098 Capabilities: [80] Power Management version 2 Kernel driver in use: ehci_hcd 00:06.0 IDE interface: NVIDIA Corporation MCP67 IDE Controller (rev a1) (prog-if 8a [Master SecP PriP]) Subsystem: Biostar Microtech Int'l Corp Device 3409 Flags: bus master, 66MHz, fast devsel, latency 0 [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8] [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1] [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8] [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1] I/O ports at f000 [size=16] Capabilities: [44] Power Management version 2 Kernel driver in use: pata_amd Kernel modules: pata_amd 00:07.0 Audio device: NVIDIA Corporation MCP67 High Definition Audio (rev a1) Subsystem: Biostar Microtech Int'l Corp Device 820c Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22 Memory at fe024000 (32-bit, non-prefetchable) [size=16K] Capabilities: [44] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+ Capabilities: [6c] HyperTransport: MSI Mapping Enable- Fixed+ Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:08.0 PCI bridge: NVIDIA Corporation MCP67 PCI Bridge (rev a2) (prog-if 01 [Subtractive decode]) Flags: bus master, 66MHz, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=32 I/O behind bridge: 0000c000-0000cfff Memory behind bridge: fdf00000-fdffffff Prefetchable memory behind bridge: fd000000-fd0fffff Capabilities: [b8] Subsystem: NVIDIA Corporation Device cb84 Capabilities: [8c] HyperTransport: MSI Mapping Enable- Fixed- 00:09.0 IDE interface: NVIDIA Corporation MCP67 AHCI Controller (rev a2) (prog-if 85 [Master SecO PriO]) Subsystem: Biostar Microtech Int'l Corp Device 5407 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23 I/O ports at 09f0 [size=8] I/O ports at 0bf0 [size=4] I/O ports at 0970 [size=8] I/O ports at 0b70 [size=4] I/O ports at dc00 [size=16] Memory at fe02a000 (32-bit, non-prefetchable) [size=8K] Capabilities: [44] Power Management version 2 Capabilities: [8c] SATA HBA v1.0 Capabilities: [b0] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [cc] HyperTransport: MSI Mapping Enable- Fixed+ Kernel driver in use: ahci 00:0b.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: 0000b000-0000bfff Memory behind bridge: fde00000-fdefffff Prefetchable memory behind bridge: 00000000fdd00000-00000000fddfffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0c.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) 
(prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=03, subordinate=03, sec-latency=0 I/O behind bridge: 0000a000-0000afff Memory behind bridge: fdc00000-fdcfffff Prefetchable memory behind bridge: 00000000fdb00000-00000000fdbfffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0d.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=04, subordinate=04, sec-latency=0 I/O behind bridge: 00009000-00009fff Memory behind bridge: fda00000-fdafffff Prefetchable memory behind bridge: 00000000fd900000-00000000fd9fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0e.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=05, subordinate=05, sec-latency=0 I/O behind bridge: 00008000-00008fff Memory behind bridge: fd800000-fd8fffff Prefetchable memory behind bridge: 00000000fd700000-00000000fd7fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:0f.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=06, subordinate=06, sec-latency=0 I/O behind bridge: 00007000-00007fff Memory behind bridge: fd600000-fd6fffff Prefetchable memory behind bridge: 00000000fd500000-00000000fd5fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:10.0 PCI bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=07, subordinate=07, sec-latency=0 I/O behind bridge: 00006000-00006fff Memory behind bridge: fd400000-fd4fffff Prefetchable memory behind bridge: 00000000fd300000-00000000fd3fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:11.0 PCI 
bridge: NVIDIA Corporation MCP67 PCI Express Bridge (rev a2) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=08, subordinate=08, sec-latency=0 I/O behind bridge: 00005000-00005fff Memory behind bridge: fd200000-fd2fffff Prefetchable memory behind bridge: 00000000fd100000-00000000fd1fffff Capabilities: [40] Subsystem: NVIDIA Corporation Device 0000 Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] HyperTransport: MSI Mapping Enable- Fixed- Capabilities: [80] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel Kernel driver in use: pcieport Kernel modules: shpchp 00:12.0 VGA compatible controller: NVIDIA Corporation C68 [GeForce 7050 PV / nForce 630a] (rev a2) (prog-if 00 [VGA controller]) Subsystem: Biostar Microtech Int'l Corp Device 1406 Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 21 Memory at fb000000 (32-bit, non-prefetchable) [size=16M] Memory at e0000000 (64-bit, prefetchable) [size=256M] Memory at fc000000 (64-bit, non-prefetchable) [size=16M] [virtual] Expansion ROM at 80000000 [disabled] [size=128K] Capabilities: [48] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+ Kernel driver in use: nvidia Kernel modules: nvidia_current, nouveau, nvidiafb 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration Flags: fast devsel Capabilities: [80] HyperTransport: Host or Secondary Interface 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map Flags: fast devsel 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller Flags: fast devsel 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control Flags: fast devsel Capabilities: [f0] Secure device <?> Kernel driver in use: k8temp Kernel modules: k8temp 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01) Subsystem: Biostar Microtech Int'l Corp Device 2305 Flags: bus master, fast devsel, latency 0, IRQ 47 I/O ports at ac00 [size=256] Memory at fdcff000 (64-bit, non-prefetchable) [size=4K] [virtual] Expansion ROM at fdb00000 [disabled] [size=128K] Capabilities: [40] Power Management version 2 Capabilities: [48] Vital Product Data Capabilities: [50] MSI: Enable+ Count=1/2 Maskable- 64bit+ Capabilities: [60] Express Endpoint, MSI 00 Capabilities: [84] Vendor Specific Information: Len=4c <?> Capabilities: [100] Advanced Error Reporting Capabilities: [12c] Virtual Channel Capabilities: [148] Device Serial Number 32-00-00-00-10-ec-81-68 Capabilities: [154] Power Budgeting <?> Kernel driver in use: r8169 Kernel modules: r8169 sudo rfkill list all 2: phy2: Wireless LAN Soft blocked: no Hard blocked: no Would appreciate insight on how to chase this down.

    Read the article

  • How do I make sure the web developer I hire will not steal my idea?

    - by Greg McNulty
    So I have a great idea for a new website. However, I don't have the time to develop it. I would like to hire a person or company to design it for me. What steps do I need to take to protect my idea? Where and how do people protect website ideas in general? Also, how easy is it for someone to tweak the idea and make it legally their own? Is a patent enough to protect such a thing, an idea? Are there different levels or types of protection? Thank you.

    Read the article

  • Book: Pro SQL Server 2008 Service Broker: Klaus Aschenbrenner

    - by Greg Low
    I've met Klaus a number of times now and attended a few of his sessions at conferences. Klaus is doing a great job of evangelising Service Broker. I wish the SQL Server team would give it as much love. Service Broker is a wonderful technology, let down by poor resourcing. Microsoft did an excellent job of building the plumbing for this product in SQL Server 2005 but then provided no management tools and no prescriptive guidance. Everyone then seemed surprised that the take-up of it was slow. I even...(read more)

    Read the article

  • Adding a Network Loopback Adapter to Windows 8

    - by Greg Low
    I have to say that I continue to be frustrated with finding out how to do things in Windows 8. Here's another one, and it's recorded so it might help someone else. I've also documented what I tried, so that if anyone from the product group ever reads this, they'll understand how I searched for it and might try to make it easier.

    I wanted to add a network loopback adapter, to have a fixed IP address to work with when using an "internal" network with Hyper-V. (The fact that I even need to do this is also painful. I don't know why Hyper-V can't make it easy to work with host system folders, etc. as easily as I can with VirtualPC, VirtualBox, etc. but that's a topic for another day.)

    In the end, what I needed was a known IP address on the same network that my guest OS was using, via the internal network (which allows connectivity from the host OS to/from guest OSs).

    I started by looking in the network adapters area but there is no "add" functionality there. Realising that this was likely to be another unexpected challenge, I resorted to searching for info on doing this. I found KB article 2777200 entitled "Installing the Microsoft Loopback Adapter in Windows 8 and Windows Server 2012". Aha, I thought, that's what I'd need. It describes the symptom as "You are trying to install the Microsoft Loopback Adapter, but are unable to find it." and that certainly sounded like me. There's a certain irony in documenting that something's hard to find instead of making it easier to find. Anyway, you'd hope that in that article they'd then provide a step by step example of how to do it, but what they supply is this:

    The Microsoft Loopback Adapter was renamed in Windows 8 and Windows Server 2012. The new name is "Microsoft KM-TEST Loopback Adapter". When using the Add Hardware Wizard to manually add a network adapter, choose Manufacturer "Microsoft" and choose network adapter "Microsoft KM-TEST Loopback Adapter".

    The trick with this of course is finding the "Add Hardware Wizard". In Control Panel -> Hardware and Sound, there are options to "Add a device" and for "Device Manager". I tried the "Add a device" wizard (seemed logical to me) but after that wizard tries its best, it just tells you that there isn't any hardware that it thinks it needs to install. It offers a link for when you can't find what you're looking for, but that leads to a generic help page that tells you how to do things like turning on your printer.

    In Device Manager, I checked the options in the program menus, and nothing useful was present. I even tried right-clicking "Network adapters", hoping that would lead to an option to add one, also to no avail. So back to the search engine I went, to try to find out where the "Add Hardware Wizard" is. Turns out I was in the right place in Device Manager, but I needed to right-click the computer's name, and choose "Add Legacy Hardware". No doubt that hasn't changed location lately, but it's a while since I needed to add one so I'd forgotten. Regardless, I'm left wondering why it couldn't be in the menu as well.

    Anyway, for a step by step list, you need to do the following:

    1. From Control Panel, select "Device Manager" under the "Devices and Printers" section of the "Hardware and Sound" tab.
    2. Right-click the name of the computer at the top of the tree, and choose "Add Legacy Hardware".
    3. In the "Welcome to the Add Hardware Wizard" window, click Next.
    4. In the "The wizard can help you install other hardware" window, choose the "Install the hardware that I manually select from a list" option and click Next.
    5. In the "The wizard did not find any new hardware on your computer" window, click Next.
    6. In the "From the list below, select the type of hardware you are installing" window, select "Network Adapters" from the list, and click Next.
    7. In the "Select Network Adapter" window, from the Manufacturer list choose Microsoft, then in the Network Adapter window choose "Microsoft KM-TEST Loopback Adapter", then click Next.
    8. In the "The wizard is ready to install your hardware" window, click Next.
    9. In the "Completing the Add Hardware Wizard" window, click Finish.

    Then you need to continue to set the IP address, etc.

    10. Back in Control Panel, select the "Network and Internet" tab, then click "View Network Status and Tasks".
    11. In the "View your basic network information and set up connections" window, click "Change adapter settings".
    12. Right-click the new adapter that has been added (find it in the list by checking the device name of "Microsoft KM-TEST Loopback Adapter"), and click Properties.

    Read the article

  • 3rd Party Tools: dbForge Studio for SQL Server

    - by Greg Low
    I've been taking a look at some of the 3rd party tools for SQL Server. Today, I looked at dbForge Studio for SQL Server from the team at DevArt. Installation was smooth. I did find it odd that it defaults to SQL authentication, not to Windows, but either works fine. I like the way they have followed the SQL Server Management Studio visual layout. That will make the product familiar to existing SQL Server Management Studio users.

    I was keen to see what the database diagram tools are like. I found that the layouts generated were quite good, and certainly superior to the built-in SQL Server ones in SSMS. I didn't find any easy way to just add all tables to the diagram though. (That might just be me.) One thing I did like was that it doesn't get confused when you have role-playing dimensions. Multiple foreign key relationships between two tables display sensibly, unlike with the standard SQL Server version. It was pleasing to see a printing option in the diagramming tool.

    I found the database comparison tool worked quite well. There are a few UI things that surprised me (like when you add a new connection to a database, it doesn't select the one you just added by default) but generally it just worked as advertised, and the code that was generated looked ok.

    I used the SQL query editor and found the code formatting to be quite fast and, while I didn't mind the style that it used by default, it wasn't obvious to me how to change the format. In Tools/Options I found things that talked about Profiles but I wasn't sure if that's what I needed. The help file pointed me in the right direction and I created a new profile. It's a bit odd that when you create a new profile, it doesn't put you straight into editing the profile. At first I didn't know what I'd done. But as soon as I chose to edit it, I found that a very good range of options was available.

    When entering SQL code, the code completion options are quick but, even though they are quite complete, one of the real challenges is in making them useful. Note in the following that while the options shown are correct, none are actually helpful:

    The Query Profiler seemed to work quite well. I keep wondering when the version supplied with SQL Server will ever have options like finding the most expensive operators, etc. Now that it's deprecated, perhaps never, but it's great to see third party options like this one and like SQL Sentry's Plan Explorer having this functionality.

    I didn't do much with the reporting options as I use SQL Server Reporting Services. Overall, I was quite impressed with this product and, given they have a free trial available, I think it's worth your time taking a look at it.

    Read the article

  • My Codemash 2011 Retrospective

    - by Greg Malcolm
    I just got back from Codemash yesterday, and I'm still on an adrenaline buzz. Here's my take on this year's encounter:

    The Awesome

    Nearly everybody in one place. Codemash is the ultimate place to catch up with community friends. This is my 3rd year visiting and I've got to know a great number of very cool people through various conferences, Give Camps and other community events. I'm finding more and more that Codemash is the best place to catch up with everybody regardless of technology interest or location. Of course I always make a whole bunch more friends while I'm there! Yay!

    Open Spaced

    I found the open spaces didn't work so well last year. This year things went a lot smoother and the topics were engaging and fresh. While I miss Alan Steven's approach of running it like an agile project, it was very cool to see it evolving. Laptops were often cracked open, not just once but frequently! For example:
    - Jasmine - Paired on a javascript kata using the Jasmine javascript test runner
    - J - Sat in on a J demo from local J enthusiast, Tracy Harms
    - Watir - More pairing, this time using Ruby with the watir-webdriver through cucumber. I'd mostly forgotten that Cucumber runs just fine without Rails. It made a change to do without.
    The other spaces were engaging too, but I think that's enough for that topic.

    Javascript Shenanigans

    I've already mentioned that I attended a Jasmine kata session. Jasmine is close to my heart right now, ever since I discovered it while on the hunt for a decent Javascript testing framework for a javascript koans project earlier this year. Well, it also got covered in the Java Precompiler and Pillar's vendor session, which was great to see. Node.js was also a recurring theme. Node.js in a nutshell? It's an extremely scalable, event-based I/O server which runs on Javascript. I'd already encountered it through a Startup Weekend project and have been noticing increasing interest of late. After encountering more node.js-driven excitement from my peers at Codemash I absolutely had to attend the open space on it. At least 20 people turned up and by the end we had some answers, a whole ton of new questions and an impromptu user group in the form of a twitter channel (#nodemash). I have no idea where this is going to go or how big it is going to become, but if it can cross the chasm into the enterprise it could become huge...

    Scala Koans

    I'm a bit of a koans addict, and I really need more exposure to functional languages, so I gave the Scala Koans precompiler a try. Great fun! I'm really glad I attended because I found I had a whole ton of questions. Currently the koans are available here, and the answers are here.

    Opportunities

    While we're on the subject can we change the subject now? Hai Gregory, You really need to keep the drinking for later in the day. I mean seriously, you're 34 and you still do this every single time! Sure, you made it to the Chad Fowler keynote ok, but you were looking rather pale, weren't you? Also might have been nice to attend 'Netflicks in the Cloud' instead of 'Sleeping It Off For People Who Should Know Better'. Kthxbye. PS: Stop talking to yourself. Not that I entirely regret it, I've had some of my greatest insights through late night drunken conversations at the CodeMash bar. Just might be nice to rein it in a little and get something out of the next morning too.

    Diversity

    This is something that is in the back of my mind because of conversations at Codemash as well as throughout the year; I'm realizing more and more how discouraging the IT profession is for women. I notice in the community there has been a lot of attention paid to stamping out harassment, which is good, but there also seems to be a massive PR issue. I really don't have any solutions, but I figure it can't hurt to pay more attention to what's going on...

    And in Other News

    I now have a picture of Chad Fowler giving me more cowbell! Sadly I managed to lose the cowbell later on. Hopefully it's gone to a Better Place. The Womack Family Band joined in with the musicians' jam this year. There's my cowbell again! Why must you hide from me? I also finally went in the water for the first time in all the years I've been coming to Codemash. Why did I wait so long?!?

    Read the article

  • MP4 files show up but won't stream to xbox 360

    - by Greg
    I set up a basic media server to stream to my Xbox 360 using uShare. Here are the instructions I used: http://linuxexpresso.wordpress.com/2011/01/02/howto-ubuntu-upnp-server-to-xbox-360/. I can stream AVI files fine, but I cannot stream MP4s. When I go to Videos on the Xbox, I can see all of the videos and folders, but when I press play on an MP4 nothing happens. On my Ubuntu desktop I can click on the MP4 file and it plays fine. And if I take that file, stick it on a thumb drive and plug it directly into the Xbox, the MP4 will play off the thumb drive. I'm at a loss as to why it won't work through uShare. Any ideas?

    Read the article

  • SQL Server 2008 R2: StreamInsight changes at RTM: Access to grouping keys via explicit typing

    - by Greg Low
    One of the problems that existed in the CTP3 edition of StreamInsight was an error that occurred if you tried to access the grouping key from within your projection expression. That was a real issue as you always need access to the key. It's a bit like using a GROUP BY in TSQL and then not including the columns you're grouping by in the SELECT clause. You'd see the results but not be able to know which results are which. Look at the following code: var laneSpeeds = from e in vehicleSpeeds group e...(read more)
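
    To make the TSQL analogy above concrete, here is a minimal sketch (with hypothetical table and column names, not taken from the original post): the first query includes the grouping column in the SELECT list, so each aggregate can be matched to its group; the second still runs, but you can no longer tell which result belongs to which group.

        -- Hypothetical table, purely to illustrate the GROUP BY analogy above.
        -- Including the grouping column tells you which group each row is for:
        SELECT LaneId, AVG(Speed) AS AvgSpeed
        FROM dbo.VehicleSpeeds
        GROUP BY LaneId;

        -- Omitting it still works, but the results are anonymous:
        SELECT AVG(Speed) AS AvgSpeed
        FROM dbo.VehicleSpeeds
        GROUP BY LaneId;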

    Read the article

  • The need for user-defined index types

    - by Greg Low
    Since the removal of the 8KB limit on serialization, the ability to define new data types using SQL CLR integration is now almost at a usable level, apart from one key omission: indexes. We have no ability to create our own types of index to support our data types. As a good example of this, consider that when Microsoft introduced the geometry and geography (spatial) data types, they did so as system CLR data types but also needed to introduce a spatial index as a new type of index. Those of us that...(read more)
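
    As a reminder of what that looked like for the spatial types, here is a minimal T-SQL sketch (with a hypothetical table name, not taken from the original post). The CREATE SPATIAL INDEX statement is the purpose-built index type Microsoft had to add alongside the geography type; there is no equivalent syntax for declaring an index type of our own design over a user-defined CLR type.

        -- Hypothetical table using the system CLR geography type.
        CREATE TABLE dbo.Stores
        (
            StoreId  int       NOT NULL PRIMARY KEY,
            Location geography NOT NULL
        );

        -- The dedicated index type Microsoft introduced for spatial data;
        -- we can't create a comparable custom index type for our own CLR types.
        CREATE SPATIAL INDEX IX_Stores_Location
        ON dbo.Stores (Location);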

    Read the article

  • Using lookahead assertions in regular expressions

    - by Greg Jackson
    I use regular expressions on a daily basis, as my daily work is 90% in Perl (legacy codebase, but that's a different issue). Despite this, I still find lookahead and lookbehind to be terribly confusing and often unreadable. Right now, if I were to get a code review with a lookahead or lookbehind, I would immediately send it back to see if the problem can be solved by using multiple regular expressions or a different approach.

    The following are the main reasons I tend not to like them:

    They can be terribly unreadable. Lookahead assertions, for example, start from the beginning of the string no matter where they are placed. That, among other things, can cause some very "interesting" and non-obvious behaviors.

    It used to be the case that many languages didn't support lookahead/lookbehind (or supported them only as "experimental features"). This isn't the case quite as much anymore, but there's still always the question of how well they're supported.

    Quite frankly, they feel like a dirty hack. Regexps often already are, but they can also be quite elegant, and have gained widespread acceptance.

    I've gotten by without any need for them at all... sometimes I think that they're extraneous.

    Now, I'll freely admit that especially the last two reasons aren't really good ones, but I felt that I should enumerate what goes through my mind when I see one. I'm more than willing to change my mind about them, but I feel that they violate some of my core tenets of programming, including:

    Code should be as readable as possible without sacrificing functionality -- this may include doing something in a less efficient but clearer way, as long as the difference is negligible or unimportant to the application as a whole.

    Code should be maintainable -- if another programmer comes along to fix my code, non-obvious behavior can hide bugs or make functional code appear buggy (see readability).

    "The right tool for the right job" -- I'm sure you can come up with contrived examples that could use lookahead, but I've never come across something that really needs them in my real-world development work. Is there anything that they're really the best tool for, as opposed to, say, multiple regexps (or, alternatively, are they the best tool for most of the cases they're used for today)?

    My question is this: Is it good practice to use lookahead/lookbehind in regular expressions, or are they simply a hack that has found its way into modern production code? I'd be perfectly happy to be convinced that I'm wrong about this, and simple examples are useful for illustration, but by themselves they won't be enough to convince me.

    Read the article
