Search Results

Search found 119 results on 5 pages for 'barriers'.

Page 4/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • Running WAMP (XAMPP) and LAMP from One SSD, On 64-bit Windows and Linux Machines

    - by nicorellius
    I have a solid state drive that I develop websites on. The reason I do this is because I work on a few different computers. Historically, I created separate developing environments to use for each machine. This was OK, but if the system changed for some reason, e.g., a new OS install, it was a pain. So I bought a USB 3.0 enclosure and put a solid state drive in there and it's pretty darn fast, which is good. I was working with three Windows machines and I could simply hook up the drive, launch my XAMPP server and away I went, developing websites: using Dreamweaver, Komodo, Notepad++, Eclipse, etc. Recently, however, one of my Windows machines' hard drive went down and instead of going back to Windows in this case, I went with Ubuntu 12.04. I have several Ubuntu workstations and servers and I like Linux, so I thought this was a great opportunity to transition. I went to work installing and trying to set up a LAMP server and, aside from XAMPP's 64-bit compatibility issues out of the box, I'm seeing other issues with getting this Linux server running. I will keep trying to resolve this, but in the meantime... my question is, has anyone ever successfully run both WAMP and LAMP from the same SSD (formatted to NTFS)? I'm sure there are lots of barriers to this happening, like the local file system, OS libraries, dependencies, etc. But I was thinking it would be cool if it could be done. I'm no expert, so if this is just plain old stupid, please don't hesitate to let me know.

    Read the article

  • What is the risk of introducing non-standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32-bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64-bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches, and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events is from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • Image Management System with Different Levels of Access

    - by Jason
    I work in the graphics department for a real estate brokerage, and we deal with a lot of photos. Agents take the photos, upload them to me, I touch up and standardize the photos, then I add them to an in-house server for future use by the graphics dept. I'd like to make the "sanitized" photo files available to the agents to use when they want, but I don't want the agents poking around the graphics department's files (things get misplaced, renamed and messed up in a hurry). What would be perfect is if we could create a read-only "mirror" (correct term?) of that server that could be accessed by the agents as needed, but which wouldn't feed back into our "sanitized" file system. Edit: I'm looking for an automatic solution... manually posting the files to two separate locations seems like an inelegant solution when working digitally. Edit: I'm trying to avoid any access barriers to the public (dirty) file system (however it's finally implemented). There are 40-50 real estate agents who need to access these files, half of whom can't reliably download an email attachment.

    Read the article

  • The New Social Developer Community: a Q&A

    - by Mike Stiles
    In our last blog, we introduced the opportunities that lie ahead for social developers as social applications reach across every aspect and function of the enterprise. Leading the upcoming JavaOne Social Developer Program October 2 at the San Francisco Hilton is Roland Smart, VP of Social Marketing at Oracle. I got to ask Roland a few of the questions an existing or budding social developer might want to know as social extends beyond interacting with friends and marketing and into the enterprise. Why is it smart for developers to specialize as social developers? What opportunities lie in the immediate future that make this a critical, in-demand position? Social has changed the way we interact with brands and with each other across the web. As we acclimate to a new social paradigm we also look to extend its benefits into new areas of our lives. The workplace is a logical next step, and we're starting to see social interactions more and more in this context. But unlocking the value of social interactions requires technical expertise and knowledge of developing social apps that tap into the social graph. Developers focused on integrating social experiences into enterprise applications must be familiar with popular social APIs and must understand how to build enterprise social graphs of their own. These developers are part of an emerging community of social developers and are key to socially enabling the enterprise. Facebook rebranded its Preferred Developer Consultant Group (PDC) as the Preferred Marketing Developers (PMD) to underscore the fact that developers are required inside marketing organizations to unlock the full potential of their platform. While this trend is starting on the marketing side with marketing developers, this is just an extension of the social developer concept that will ultimately drive social across the enterprise. What are some of the various ways social will be making its way into every area of enterprise organizations? How will it be utilized and what kinds of applications are going to be needed to facilitate and maximize these changes? Check out Oracle's vision for the social-enabled enterprise. It's a high-level overview of how social will have an impact across the enterprise. For example: HR can leverage social in recruiting and retention; Sales can leverage social as a prospecting tool; Marketing can use social to gain market insight; Customer support can use social to leverage community support to improve customer satisfaction while reducing service cost; Operations can leverage social to improve systems. That's only the beginning. Once sleeves get rolled up and social developers and innovators get to work, still more social functions will no doubt emerge. What makes Java one of, if not the, most viable platforms on which to build these new enterprise social applications? Java is certainly one of the best platforms on which to build social experiences because there's such a large existing community of Java developers. This means you can affordably recruit talent, and it's possible to effectively solicit advice from the community through various means, including our new Social Developer Community. Beyond that, there are already some great proof points that Java is the best platform for creating social experiences at scale. Consider LinkedIn and Twitter. Tell us more about the benefits of collaboration and more about what the Oracle Social Developer Community is.
What opportunities does that offer up and what are some of the ways developers can actively participate in and benefit from that community? Much has been written about the overall benefits of collaborating with other developers. Those include an opportunity to introduce yourself to the community of social developers, foster a reputation, establish expertise, contribute to the advancement of the space, get feedback, experiment with the latest concepts, and gain inspiration. In short, collaboration is a tool that must be applied properly within a framework to get the most value out of it. The OSDC is a place where social developers can congregate to discuss the opportunities and challenges of building social integrations into their applications. What "needs" will this community have? We don't know yet. But we wanted to create a forum where we can engage and understand what social developers are thinking about, excited about, struggling with, etc. The OSDC can then step in if we can help remove barriers and add value in a serious and committed way so Oracle can help drive practice development.

    Read the article

  • Code is not the best way to draw

    - by Bertrand Le Roy
    It should be quite obvious: drawing requires constant visual feedback. Why is it then that we still draw with code in so many situations? Of course it's because the low-level APIs always come first, and design tools are built after and on top of those. Existing design tools also don't typically include complex UI elements such as buttons. When we launched our Touch Display module for Netduino Go!, we naturally built APIs that made it easy to draw on the screen from code, but very soon, we felt the limitations and tedium of drawing in code. In particular, any modification requires a modification of the code, followed by compilation and deployment. When trying to set up buttons at pixel precision, the process is not optimal. On the other hand, code is irreplaceable as a way to automate repetitive tasks. While tools like Illustrator have ways to repeat graphical elements, they do so in a way that is a little alien and counter-intuitive to my developer mind. From these reflections, I knew that I wanted a design tool that would be structurally code-centric but that would still enable immediate feedback and mouse adjustments. While thinking about the best way to achieve this goal, I saw a fantastic video by Bret Victor. The key to the magic in all these demos is permanent execution of the code being edited. Whenever a parameter is being modified, everything is re-executed immediately so that the impact of the modification is instantaneously visible. If you do this all the time, the code and the result of its execution fuse in the mind of the user into dual representations of a single object. All mental barriers disappear. It's like magic. The tool I built, Nutshell, is just another implementation of this principle. It manipulates a list of graphical operations on the screen. Each operation has a nice editor, and translates into a bit of code. Any modification to the parameters of the operation will modify the bit of generated code and trigger a re-execution of the whole program. This happens so fast that it feels like the drawing reacts instantaneously to all changes. The order of the operations is also the order in which the code gets executed. So if you want to bring objects to the front, move them down in the list, and move them up if you want to send them to the back. But where it gets really fun is when you start applying code constructs such as loops to the design tool. The elements that you put inside of a loop can use the loop counter in expressions, enabling crazy scenarios while retaining the real-time editing features. When you're done building, you can just deploy the code to the device and see it run in its native environment. This works thanks to two code generators. The first code generator builds the JavaScript that is executed in the browser to build the canvas view in the web page hosting the tool. The second code generator builds the C# code that will run on the Netduino Go! microcontroller and that will drive the display module. The possibilities are fascinating, even if you don't care about driving small touch screens from microcontrollers: it is now possible, within a reasonable budget, to build specialized design tools for very vertical applications. Direct feedback is a powerful ally in many domains. Code generation driven by visual designers has become more approachable than ever thanks to extraordinary JavaScript libraries and to the powerful development platform that modern browsers provide.
I encourage you to tinker with Nutshell and let it open your eyes to new possibilities that you may not have considered before. It’s open source. And of course, my company, Nwazet, can help you develop your own custom browser-based direct feedback design tools. This is real visual programming…

    Read the article

  • Securing Flexfield Value Sets in EBS 12.2

    - by Sara Woodhull
    Release 12.2 includes a new feature: flexfield value set security. This new feature gives you additional options for ensuring that different administrators have non-overlapping responsibilities, which in turn provides checks and balances for sensitive activities. Separation of Duties (SoD) is one of the key concepts of internal controls and is a requirement for many regulations, including the Sarbanes-Oxley (SOX) Act, the Health Insurance Portability and Accountability Act (HIPAA), and the European Union Data Protection Directive. Its primary intent is to put barriers in place to prevent fraud or theft by an individual acting alone. Implementing Separation of Duties requires minimizing the possibility that users could modify data across application functions where the users should not normally have access. For flexfields and report parameters in Oracle E-Business Suite, values in value sets can affect functionality such as the rollup of accounting data, job grades used at a company, and so on. Controlling access to the creation or modification of value set values can be an important piece of implementing Separation of Duties in an organization. New Flexfield Value Set Security feature: Flexfield value set security allows system administrators to restrict users from viewing, adding or updating values in specific value sets. Value set security enables role-based separation of duties for key flexfields, descriptive flexfields, and report parameters. For example, you can set up value set security such that certain users can view or insert values for any value set used by the Accounting Flexfield but no other value sets, while other users can view and update values for value sets used for any flexfields in Oracle HRMS. You can also segregate access by Operating Unit as well as by role or responsibility. Value set security uses a combination of data security and role-based access control in Oracle User Management. Flexfield value set security provides a level of security that is different from the previously-existing and similarly-named features in Oracle E-Business Suite: Function security controls whether a user has access to a specific page or form, as well as what operations the user can do in that screen. Flexfield value security controls what values a user can enter into a flexfield segment or report parameter (by responsibility) during routine data entry in many transaction screens across Oracle E-Business Suite. Flexfield value set security (this feature, new in Release 12.2) controls who can view, insert, or update values for a particular value set (by flexfield, report, or value set) in the Segment Values form (FNDFFMSV). The effect of flexfield value set security is that a user of the Segment Values form will only be able to view those value sets for which the user has been granted access. Further, the user will be able to insert or update/disable values in that value set if the user has been granted privileges to do so. Flexfield value set security affects independent, dependent, and certain table-validated value sets for flexfields and report parameters. Initial State of the Feature upon Upgrade: Because this is a new security feature, it is turned on by default. When you initially install or upgrade to Release 12.2.2, no users are allowed to view, insert or update any value set values (users may even think that their values are missing or invalid because they cannot see the values).
You must explicitly set up access for specific users by enabling appropriate grants and roles for those users. We recommend using flexfield value set security as part of a comprehensive Separation of Duties strategy. However, if you choose not to implement flexfield value set security upon upgrading to or installing Release 12.2, you can enable backwards compatibility--users can access any value sets if they have access to the Values form--after you upgrade. The feature does not affect day-to-day transactions that use flexfields. However, you must either set up specific grants and roles or enable backwards compatibility before users can create new values or update or disable existing values. For more information, see:
Release 12.2 Flexfield Value Set Security Documentation Update for Patch 17305947:R12.FND.C (Document 1589204.1)
R12.2 TOI: Implement and Use Application Object Library (AOL) - Flexfields Security and Separation of Duties for Value Sets (recorded training)

    Read the article

  • 2013 Predictions for Retail

    - by David Dorf
    It's that time of year to roll out the predictions for next year.  I can't say I've really nailed it in the past, but feel free to look back at my 2012, 2011, and 2010 predictions.  I'm not expecting anything earth-shattering this year; just continued maturation of several technologies that are finally taking hold. 1. Next day delivery -- Amazon finally decided it wasn't worth fighting state taxes and instead decided to place distribution centers everywhere so they can potentially offer next-day deliveries.  Not to be outdone, Walmart is looking to leverage its huge physical presence to offer the same.  Clubs like ShopRunner are pushing delivery barriers as well, so the norm is shifting to free shipping in a few days or relatively cheap shipping overnight.  Retailers need to be thinking about how to ship from physical stores. 2. Bring your own device -- Earlier this year Intuit bought AisleBuyer, a mobile self-checkout start-up, at least somewhat validating the BYOD approach.  Grocery stores, especially in Europe, have been supporting in-aisle self-scanning for a while and I'm betting it will find a home in certain verticals in the US too.  There's also the BYOD concept for employees.  Some retailers are considering issuing mobile devices at hiring alongside the shirt and name-tag.  Employees become responsible for the hardware until they leave. 3. TV shopping -- Will Apple finally release a TV product in 2013?  Who knows?  But the industry isn't standing still. Companies like QVC and HSN are already successfully combining the TV and online experiences for shopping.  Comcast is partnering with Tivo to allow viewers to interact with ads, with PayPal handling payment.  This will be a slow maturation, but expect TVs to get smarter and eventually become a new selling channel (pun intended) for retailers. 4. Privacy backlash -- It only takes one big incident to stir the public, and I'm betting we have one in 2013.  Facebook, Google, or Apple will test the boundaries of what the public is willing to accept.  It could involve a retailer using geo-location technology, or possibly video analytics.  And as is always the case, the offender will apologize, temporarily remove the technology, and wait 2-3 years for it to be generally accepted.  Privacy is a moving target. 5. More NFC -- I've come to the conclusion that adoption of any banking technology is going to be slow.  It was slow for credit cards, ATMs, and online billpay so why should it be any different for NFC?  Maybe, just maybe the iPhone 5S will have an NFC chip, but we're not going to see mainstream uptake for years.  Next year we'll continue to see incremental improvements from Isis, Google, and PayPal and a plethora of new startups, but don't toss your magstripe cards just yet. 6. In-store location -- The technologies for tracking people inside stores are really improving.  Retailers can track people using video cameras, infrared, and the WiFi radios in mobile phones.  We're getting closer to the point where accuracy could be down to a shelf facing, which will help retailers understand how people shop, where they spend time, and what displays attract them.  Expect CPG companies to get involved and partner with retailers, since the data benefits both parties.  Consumers will benefit by being directed right to the products they seek.  (In 2013 ARTS is forming a workteam to develop new standards in this area.) 7.
M&A -- Looking back at 2012 there were some really big deals involving IBM, Oracle, JDA, and NCR and I expect that trend will likely continue as vendors add assets to bolster their portfolios.  Many retailers are due for an IT transformation to support anywhere, anytime shoppers, and one-stop-vendors can minimize complexity and costs. Predictions from other sources: Independent Retailer Stores Magazine IDC Insights Mobile Commerce Daily

    Read the article

  • Comparing Isis, Google, and Paypal

    - by David Dorf
    Back in 2010 I was sure NFC would make great strides, but here we are two years later and NFC doesn't seem to be sticking. The obvious reason is the chicken-and-egg problem.  Retailers don't want to install the terminals until the phones support NFC, and vice-versa. So consumers continue to sit on the sidelines waiting for either side to blink and make the necessary investment.  In the meantime, EMV is looking for a way to sneak into the US with the help of the card brands. There are currently three major solutions that are battling in the marketplace.  All three know that replacing mag-stripe alone is not sufficient to move consumers.  Long-term it's the offers and loyalty programs combined with tendering that make NFC attractive. NFC solutions cross lots of barriers, so a strong partner system is required.  The solutions need to include the carriers, card brands, banks, handset manufacturers, POS terminals, and most of all lots of merchants.  Lots of coordination is necessary to make the solution seamless to the consumer. Google Wallet: Google's problem has always been that only the Nexus phone has an NFC chip that supports their wallet.  There are a couple of additional phones out there now, but adoption is still slow.  They acquired Zavers a while back to incorporate digital coupons, but the bulk of their users continue to be non-NFC.  They have taken an open approach by not specifying particular payment brands.  Google is piloting in San Francisco and New York, supporting both MasterCard PayPass and stored value. I suppose the other card brands may eventually follow.  There's no cost for consumers or merchants -- Google will make money via targeted ads. Isis: Not long after Google announced its wallet, AT&T, Verizon, and T-Mobile announced a joint venture called Isis.  They are in the unique position of owning the SIM in the phones they issue.  At first it seemed Isis was a vehicle for the carriers to compete with the existing card brands, but Isis later switched to a generic wallet that supports the major card brands.  Isis reportedly charges issuers a $5 fee per customer per year.  Isis will pilot this summer in Salt Lake City and Austin. PayPal: PayPal, the clear winner in the online payment space beyond traditional credit cards, is trying to move into physical stores.  After negotiations with Google to provide a wallet broke off, PayPal decided to avoid NFC altogether, at least for now, and focus on payments without any physical card or phone.  By avoiding NFC, consumers don't need an NFC-enabled phone and merchants don't need a new reader.  Consumers must enter their phone number and PIN in the merchant's existing device, or they can enter their PIN in the PayPal inStore app running on their phone, then show the merchant a unique barcode which authorizes payment. PayPal is free for consumers and charges a fee for merchants.  It's not clear, at least to me, how PayPal handles fraudulent transactions and whether the consumer is protected. The wildcard is, of course, Apple.  Their mobile technologies set the standard, so incorporating NFC chips would certainly accelerate adoption of many payment solutions.  Their announcement today of the iOS Passbook is a step in the right direction, but stops short of handling payments. For those retailers that have invested in modern terminals, it seems the best strategy is to support all the emerging solutions and let the consumers choose the winner.

    Read the article

  • Handling "related" work within a single agile work item

    - by Tesserex
    I'm on a project team of 4 devs, myself included. We've been having a long discussion on how to handle extra work that comes up in the course of a single work item. This extra work is usually things that are slightly related to the task, but not always necessary to accomplish the goal of the item (that may be an opinion). Examples include, but are not limited to: refactoring the code changed by the work item; refactoring code neighboring the code changed by the item; re-architecting the larger code area around the ticket (for example, if an item has you changing a single function, you realize the entire class could now be redone to better accommodate this change); improving the UI on a form you just modified. When this extra work is small we don't mind. The problem is when this extra work causes a substantial extension of the item beyond the original feature point estimation. Sometimes a 5 point item will actually take 13 points of time. In one case we had a 13 point item that in retrospect could have been 80 points or more. There are two options going around in our discussion for how to handle this. (1) We can accept the extra work in the same work item, and write it off as a mis-estimation. Arguments for this have included: We plan for "padding" at the end of the sprint to account for this sort of thing. Always leave the code in better shape than you found it. Don't check in half-assed work. If we leave refactoring for later, it's hard to schedule and may never get done. You are in the best mental "context" to handle this work now, since you're waist deep in the code already. Better to get it out of the way now and be more efficient than to lose that context when you come back later. (2) We draw a line for the current work item, and say that the extra work goes into a separate ticket. Arguments include: Having a separate ticket allows for a new estimation, so we aren't lying to ourselves about how many points things really are, or having to admit that all of our estimations are terrible. The sprint "padding" is meant for unexpected technical challenges that are direct barriers to completing the ticket requirements. It is not intended for side items that are just "nice-to-haves". If you want to schedule refactoring, just put it at the top of the backlog. There is no way for us to properly account for this stuff in an estimation, since it seems somewhat arbitrary when it comes up. A code reviewer might say "those UI controls (which you actually didn't modify in this work item) are a bit confusing, can you fix that too?" which is like an hour, but they might say "Well if this control now inherits from the same base class as the others, why don't you move all of this (hundreds of lines of) code into the base and rewire all this stuff, the cascading changes, etc.?" And that takes a week. It "contaminates the crime scene" by adding unrelated work into the ticket, making our original feature point estimates meaningless. In some cases, the extra work postpones a check-in, causing blocking between devs. Some of us are now saying that we should decide some cut off, like if the additional stuff is less than 2 FP, it goes in the same ticket, if it's more, make it a new ticket. Since we're only a few months into using Agile, what's the opinion of all the more seasoned Agile veterans around here on how to handle this?

    Read the article

  • Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting Team

    - by JuergenKress
    In the last 12 years Link Consulting has been building its presence in specific areas such as Governance and Architecture, both in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience and combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the best of both worlds in a solution that enables SOA Governance projects, initiatives and programs. Enterprise Architecture Management System: Enterprise Architecture Management System (EAMS) is an automation-based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to the users. EAMS provides capabilities to create/customize/analyze repository data, architectural blueprints, reports and analytic charts. Oracle Enterprise Repository: Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of OER include: Asset Management, Asset Lifecycle Management, Usage Tracking, Service Discovery, Version Management, Dependency Analysis, and Portfolio Management. EAMS - OER Edition: The solution takes the advantages and features from both products and combines them in a symbiotic tool that enhances the quality of SOA Governance Initiatives and Programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries. The SOA blueprints: The solution comes with a set of predefined architectural representations that help the organization better perceive their SOA landscape. More blueprints can be easily created in order to accommodate the organization's needs in terms of detail, audience and metadata. Charts & Dashboards: The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets. Time Based Visualization: All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the Past, Present and Future. The solution delivers Gap Analysis, a project-oriented approach that takes into consideration the As-Was, As-Is and To-Be. Time-based visualization differentiating factors: extensive automation and maintenance of architectural representations; an organization-wide solution; easy access and navigation to and between all architectural artifacts and representations;
flexible meta-model, customization and extensibility capabilities; lifecycle management and enforcement of the time dimension over all the repository content; profile-based customization; comprehensive visibility; architectural alignment; and friendly and striking user interfaces. For more information on EAMS visit us here. For more information on SOA visit us here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: Link Consulting,OER,OSR,SOA Governance,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • The Healthy Tension That Mobility Creates

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps. In my previous post, I talked about the value of the mobile revolution on businesses and workers. Now let me put on a different hat and view the world from the IT department and the IT leader’s viewpoint. The IT leader has different concerns – around privacy, potential liability of information leakage, and intellectual property protection. These concerns and the leader’s goals create a healthy tension with the users. For example, effective device management becomes a must have for the IT leader, especially if you look at the Android ecosystem as an example. There are benefits to the Android strategy, but there are also drawbacks, such as a lack of uniformity – in device management, in operating systems, and in the application taxonomy and capabilities. If you compare Android to iOS, Apple's operating system, iOS is more unified, more streamlined, and easier to manage. In either case, this is where mobile device management in the cloud makes good sense. I don't think IT departments should be hosting device management and managing that complexity. It should be a cloud service and I predict it's going to be key for our customers. A New Focus for IT Departments: So where does that leave the IT departments? I think their futures are in governance, which is a more strategic play than a tactical one. Device management is tactical and it's the “now” topic. But the mobile phenomenon, if you will, is going to drive significant change in terms of how IT plans, hosts, and deploys enterprise applications. For example, opening up enterprise applications for mobile users presents some challenges unless you deploy more complicated network topologies, such as virtual private networks and threat protection technology. If you really want employees to be mobile you need to remove those kinds of barriers. But I don’t think IT departments want to wrestle with exposing their private enterprise data centers and being responsible for hosted business applications – applications that they are, in a sense, making vulnerable to the public world. This opens up a significant need and a significant driver for cloud applications. However, it's not just about taking away the complexity – it's also about taking away the responsibility. Why should every business have to carry the responsibility and figure out all the nuts and bolts of how to protect themselves in this public, mobile world? When you use apps in the cloud, either your vendor or your hosting partner should have figured all that out. They need to assure the business that they are adhering to all sorts of security and compliance regulations so users can be connected and have access to information anywhere anytime. More Ideas and Better Service: What’s more interesting is the world of possibilities that the connected, cloud-based world enables. I believe that the one-size-fits-all, uber-best practices, lowest-common-denominator-like capabilities will go away. IT will now be able to solve very specific business challenges for the different corporate functions it serves. In this new world, IT will play a key role in enabling different organizations within a company to be best in class and delivering greater value to the line of business managers. IT will actually help to differentiate. The net result is a more agile workforce and business because each department is getting work done its own way.

    Read the article

  • Introducing the Oracle Linux Playground yum repo

    - by wcoekaer
    We just introduced a new yum repository/channel on http://public-yum.oracle.com called the playground channel. What we started doing is the following: when a new stable mainline kernel is released by Linus or GregKH, we internally build RPMs to test it and do some QA work around it to keep track of what's going on with the latest development kernels. It helps us understand how performance moves up or down and if there are issues, we try to help look into them and of course send that stuff back upstream. Many Linux users out there are interested in trying out the latest features but there are some potential barriers to doing this. (1) In general, you are looking at an upstream development distribution, which means that everything changes, both in userspace (random applications) and in the kernel. Projects like Fedora are very useful, and for someone who just wants to see how the entire distribution evolves with all the changes, this is a great way to stay current. A drawback here, though, is that if you have applications that are not part of the distribution, there's a lot of manual work involved or they might just not work because the changes are too drastic. The introduction of systemd is a good example. (2) When you look at many of our customers that are interested in our database products or applications, starting from a supported/certified userspace/distribution, like Oracle Linux, is a much easier way to get your feet wet in seeing what new/future Linux kernel enhancements could do. This is where the playground channel comes into play. When you install Oracle Linux 6 (which anyone can download and use from http://edelivery.oracle.com/linux), grab the latest public yum repository file http://public-yum.oracle.com/public-yum-ol6.repo, put it in /etc/yum.repos.d and enable the playground repo:
[ol6_playground_latest]
name=Latest mainline stable kernel for Oracle Linux 6 ($basearch) - Unsupported
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/playground/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1
Now, all you need to do is type yum update and you will be downloading the latest stable kernel, which will install cleanly on Oracle Linux 6. Thus you end up with a stable Linux distribution where you can install all your software, and then download the latest stable kernel (at time of writing this is 3.6.7) without having to recompile a kernel, without having to jump through hoops. There is of course a big, very important disclaimer: this is NOT for PRODUCTION use. We want to try and help make it easy for people who are interested, from a user perspective, in where the Linux kernel is going, and make it easy to install and use it and play around with new features. Without having to learn how to compile a kernel and without necessarily having to install a complete new distribution with all the changes top to bottom. So we don't and won't introduce any new userspace changes; this project really is about making it easy to try out the latest upstream Linux kernels in a very easy way on an environment that's stable and that you can keep current, since all the latest errata for Oracle Linux 6 are published on the public yum repo as well. So: one repository location for all your current changes and the upstream kernels. We hope that this will get more users to try out the latest kernel and report their findings. We are always interested in understanding stability and performance characteristics.
As new features that could potentially be interesting or useful for various products go into the mainline kernel, we will try to point them out on our blogs and give an example of how something can be used so you can try it out for yourselves. Anyway, I hope people will find this useful and that it will help increase interest in upstream development, beyond reading lkml, among some of the more non-kernel-developer types.

    Read the article

  • College Ratings via the Federal Government

    - by user9147039
    A few weeks back you might remember news about a higher education rating system proposal from the Obama administration. As I've discussed previously, political and stakeholder pressures to improve outcomes and increase transparency are stronger than ever before. The executive branch proposal is intended to make progress in this area. Quoting from the proposal itself, "The ratings will be based upon such measures as: Access, such as percentage of students receiving Pell grants; Affordability, such as average tuition, scholarships, and loan debt; and Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates.” This is going to be quite complex, to say the least. Most notably, higher ed is not monolithic. From community and other 2-year colleges, to small private 4-year, to professional schools, to large public research institutions…the many walks of higher ed life are, well, many. Designing a ratings system that doesn't wind up with lots of unintended consequences and collateral damage will be difficult. At best you would end up potentially tarnishing the reputation of certain institutions that were actually performing well against the metrics and outcome measures that make sense in their "context" of education. At worst you could spend a lot of time and resources designing a system that would lose credibility with its "customers". A lot of institutions I work with already have in place systems like the one described above. They are tracking completion rates, completion timeframes, transfers to other institutions, job placement, and salary information. As I talk to these institutions there are several constants worth noting:
• Deciding on which metrics to measure is complicated. While employment and salary data are relatively easy to track, qualitative measures are more difficult. How do you quantify the benefit to someone who studies in a field that may not compensate him or her as well as another field but that provides huge personal fulfillment and reward?
• The data is available but the systems to transform the data into actual information that can be used in meaningful ways are not. Too often in higher ed information is siloed. As such, much of the data that needs to be a part of a comprehensive system sits in multiple organizations, oftentimes outside the reach of core IT.
• Politics and culture are big barriers. One of the areas that my team and I spend a lot of time talking about with higher ed institutions all over the world is the imperative to optimize for student success. This, like the tracking of the students’ achievement after graduation, requires a level of organizational capacity that does not currently exist. The primary barrier is the culture of "data islands" in higher ed, and the need for leadership to drive out the divisions between departments, schools, colleges, etc. and institute academy-wide analytics and data stewardship initiatives that will enable student success.
• Data quality is a very big issue. So many disparate systems exist (some on premise, some "in the cloud") that keep data about "persons" using different means to identify them. Establishing a single source of truth about an individual and his or her data is difficult without some type of data quality policy and tools. Good tools actually exist but are seldom leveraged.
Don't misunderstand - I think it's a great idea to drive additional transparency and accountability into the system of higher education.
And not just at home, but globally. Students and parents need access to key data to make informed, responsible choices. The tools exist to not only enable this kind of information to be shared but to capture the very metrics stakeholders care most about and in a way that makes sense in the context of a given institution's "place" in the overall higher ed panoply.

    Read the article

  • Pinterest and the Rising Power of Imagery

    - by Mike Stiles
    If images keep you glued to a screen, you’re hardly alone. Countless social users are letting their eyes do the walking, waiting for that special photo to grab their attention. And perhaps more than any other social network, Pinterest has been giving those eyes plenty of room to walk. Pinterest came along in 2010. Its play was that users could simply create topic boards and pin pictures to the appropriate boards for sharing. Yes there are some words, captions mostly, but not many. The speed of its growth raised eyebrows. Traffic quadrupled in the last quarter of 2011, with 7.51 million unique visitors in December alone. It now gets 1.9 billion monthly page views. And it was sticky. In the US, the average time a user spends strolling through boards and photos on Pinterest is 15 minutes, 50 seconds. Proving the concept of browsing a catalogue is not dead, it became a top 5 referrer for several apparel retailers like Land’s End, Nordstrom, and Bergdorfs. Now a survey of online shoppers by BizRate Insights says that Pinterest is responsible for more purchases online than Facebook. Over 70% of its users are going there specifically to keep up with trends and get shopping ideas. And when they buy, the average order value is $179. Pinterest is also scoring better in terms of user engagement. 66% of pinners regularly follow and repin retailers, whereas 17% of Facebook fans turn to that platform for purchase ideas. (Facebook still wins when it comes to reach and driving traffic to 3rd-party sites by the way). Social posting best practices have consistently shown that posts with photos are rewarded with higher engagement levels. You may be downright Shakespearean in your writing, but what makes images in the digital world so much more powerful than prose? 1. They transcend language barriers. 2. They’re fun and addictive to look at. 3. They can be consumed in fractions of a second, important considering how fast users move through their social content (admit it, you do too). 4. They’re efficient gateways. A good picture might get them to the headline. A good headline might then get them to the written content. 5. The audience for them surpasses demographic limitations. 6. They can effectively communicate and trigger an emotion. 7. With mobile use soaring, photos are created on those devices and easily consumed and shared on them. Pinterest’s iPad app hit #1 in the Apple store in 1 day. Even as far back as 2009, over 2.5 billion devices with cameras were on the streets, generating in just one year 10% of the number of photos ever taken. But let’s say you’re not a retailer. What if you’re a B2B whose products or services aren’t visual? Should you worry about your presence on Pinterest? As with all things, you need a keen awareness of who your audience is, where they reside online, and what they want to do there. If it doesn’t make sense to put a tent stake in Pinterest, fine. But ignore the power of pictures at your own peril. If not visually, how are you going to attention-grab social users scrolling down their News Feeds at top speed? You’re competing with every other cool image out there from countless content sources. Bore us and we’ll fly right past you.

    Read the article

  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 edition of the Oracle Information InDepth Newsletter, Oracle WebCenter Edition. Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences.
A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions.
Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives—plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention.
The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user’s behavior, and explicit targeting that takes a user’s profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer.
Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence—and share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation.
The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices that are in use today.
Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized and relevant multichannel marketing initiatives while also managing to minimize the time and effort required to manage mobile sites. Are you curious to know how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.

    Read the article

  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi All, I am writing an application using Boost threads and using Boost barriers to synchronize the threads. I have two machines to test the application. Machine 1 is a Core 2 Duo (T8300) CPU machine (Windows XP Professional - 4GB RAM) where I am getting the following performance figures:
Number of threads: 1, TPS: 21
Number of threads: 2, TPS: 35 (66% improvement)
A further increase in the number of threads decreases the TPS, but that is understandable as the machine has only two cores. Machine 2 is a 2 x quad-core (Xeon X5355) CPU machine (Windows 2003 Server with 4GB RAM) and has 8 effective cores.
Number of threads: 1, TPS: 21
Number of threads: 2, TPS: 27 (28% improvement)
Number of threads: 4, TPS: 25
Number of threads: 8, TPS: 24
As you can see, performance is degrading after 2 threads (though it has 8 cores). If the program had some bottleneck, then performance should have degraded for 2 threads as well. Any ideas or explanations? Does the OS have some role in performance? It seems like the Core 2 Duo (2.4GHz) scales better than the Xeon X5355 (2.66GHz) even though the Xeon has a higher clock speed. Thank you -Zoolii
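For context, here is a minimal sketch of the boost::thread / boost::barrier pattern the question describes. It is not the poster's actual code; the worker body and the padded per-thread counter are illustrative. Two common culprits for this kind of flat scaling on a two-socket, FSB-based Xeon are false sharing of adjacent per-thread data and the cost of the barrier rendezvous itself once threads span both sockets:

#include <boost/thread.hpp>
#include <boost/thread/barrier.hpp>
#include <boost/bind.hpp>
#include <boost/ref.hpp>
#include <iostream>

// Hypothetical per-thread state, padded so two threads never write to the
// same cache line (unpadded adjacent counters are a classic false-sharing trap).
struct PaddedCounter {
    long count;
    char pad[64 - sizeof(long)];
};

// Each worker does some independent work, then waits at the barrier once per step.
void worker(boost::barrier& sync, int steps, PaddedCounter& my)
{
    for (int i = 0; i < steps; ++i) {
        ++my.count;      // stand-in for one unit of real work
        sync.wait();     // all threads rendezvous here every iteration
    }
}

int main()
{
    const int num_threads = 4;
    const int steps = 100000;

    boost::barrier sync(num_threads);
    PaddedCounter counters[num_threads];
    for (int t = 0; t < num_threads; ++t) counters[t].count = 0;

    boost::thread_group threads;
    for (int t = 0; t < num_threads; ++t)
        threads.create_thread(boost::bind(&worker, boost::ref(sync), steps,
                                          boost::ref(counters[t])));
    threads.join_all();

    long total = 0;
    for (int t = 0; t < num_threads; ++t) total += counters[t].count;
    std::cout << "total units of work: " << total << std::endl;
    return 0;
}

If the real work between barrier waits is short, the rendezvous cost can dominate and cap TPS regardless of core count, which would be consistent with the numbers above.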

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read what document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer which then goes to the competitor. The administration of such a matrix is a nightmare because (1) the matrix is based on department and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again - this is not ideal because this does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article

  • Compiling Wanderlust for Windows and using it for Gmail

    - by User1
    I'm trying to get Wanderlust working in Windows to connect to Gmail. Compiling the code is much more painful than expected. Here are the barriers so far:
Can't download dependent packages: SEMI, APEL, and FLIM. I eventually found newer versions, but I'm not sure they will work. Anyone have the older versions?
Needs make and install. I used MSYS and it seems to have compiled okay.
SSL support. I was getting a "Cannot open load file: ssl" error. I found an ssl.el that comes with w3, so I installed w3.
Bash command in ssl.el: ssl-get-command is running something from /bin/sh (not a directory I have in Windows). I really don't want to refactor this code. Is there a better way?
Others speak very highly of Wanderlust, so I want to give it a try. I feel like I'm almost there, but am pretty much worn out with all the crazy configuration I have to do. Does anyone have this working on Windows? I'm pretty sure it will work with Gmail, because of this post. But will it work in Windows too? If you have a few pointers, please help.

    Read the article

  • What is PocoCapsule's current status?

    - by seas
    What is PocoCapsule's current status? Is it evolving? Has it been forked into some other product? What about the whole idea of IoC for C++? If PocoCapsule is not evolving, is it because IoC was considered not useful for C++, or unsafe, or because other patterns appeared, or something else? As far as I understand there are 2-3, maybe a few more, products that implement IoC for C++ available, and PocoCapsule is the most mature of them. I see several disadvantages in the current version (it appears to be 1.1 from Google Code):
No separate namespace.
Header files are required to be right in the INCLUDE folder - it would be better to place them in a subfolder.
Generation tools depend on Java.
No static linking libraries are built by default.
Cannot generate source code out of setup.xml for compilation and linking with my app if I don't need the reconfiguration feature.
Does anybody have the same thoughts? Does anybody work on something from this list? Are there any barriers to start working, like patents?
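To ground the idea, here is a minimal, hypothetical sketch of constructor-based dependency injection in plain C++ - the wiring pattern that an IoC container such as PocoCapsule automates from a descriptor like setup.xml. The class names are illustrative and this is not PocoCapsule's API:

#include <iostream>
#include <string>

// Abstract service interface.
struct Logger {
    virtual ~Logger() {}
    virtual void log(const std::string& msg) = 0;
};

// One concrete implementation; others could be swapped in without
// touching the component that uses the interface.
struct ConsoleLogger : Logger {
    void log(const std::string& msg) { std::cout << msg << std::endl; }
};

// A component that receives its dependency from the outside
// (constructor injection) instead of creating it internally.
class OrderService {
public:
    explicit OrderService(Logger& logger) : logger_(logger) {}
    void placeOrder(const std::string& item) { logger_.log("ordered: " + item); }
private:
    Logger& logger_;
};

int main() {
    // The "container" role: choose and wire the implementation here.
    // An IoC framework would perform this step from an external descriptor.
    ConsoleLogger logger;
    OrderService service(logger);
    service.placeOrder("widget");
    return 0;
}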

    Read the article

  • Can Remote Desktop Services be deployed and administered by PowerShell alone, without a domain, in Windows Server 2012 and 2012 R2?

    - by Warren P
    Windows Server 2008 R2 allowed deployment of Terminal Server (Remote Desktop Services) without a domain, and without any insistence on domains. This was very useful, especially for standalone virtual or cloud deployments of a server that is managed remotely for a remote client who has no need or desire for any Active Directory or domain features. This has become steadily more difficult as Microsoft restricts its technologies further in each Windows release. With Windows Server 2012, configuring licensing for Remote Desktop Services is more difficult when not on a domain, but still possible. With Windows Server 2012 R2 (at least in the preview) the barriers are now severe: the Add/Remove Roles and Features wizard in Windows Server 2012 R2 has a special RDS deployment mode with a rule that says if you aren't on a domain you can't deploy; it tells you to create or join a domain first. This of course comes into direct conflict with the fact that an Active Directory domain controller should not be the same machine as a terminal server. So Microsoft's technology is not so much a Cloud Operating System as a Cluster of Unwanted Nodes, needed to support the one machine I actually WANT to deploy. This is gross, and so I am trying to find a workaround. However, if you skip that wizard and just check the checkboxes in the main Roles/Features wizard, you can deploy the features, but the UI to configure them is not there, and when you go back to the RDS configuration page in the roles wizard, you get a message saying you cannot administer your Remote Desktop Services system while logged in as a local-computer administrator: although you have all the admin privileges you could have (in your workgroup-based system), the RDS configuration UI will not accept those credentials and let you continue. My question, in brief, is: can I still somehow obtain the following end result? I need to allow 10-20 users per system to have an RDS (TS) session. I do not need any of the fancy-pants RDS options, unless Microsoft somehow depends on those features being present. I believe I need the "RDS Session Host", as this is the guts of "Terminal Server"; Microsoft describes it as the "full Windows desktop for Remote Desktop Services client". I need to configure licensing so that the grace period does not expire leaving my RDS non-functional, so this probably means I need a way to configure TS CALs. If all of the above could technically be done with the judicious use of PowerShell, I am prepared even to consider developing all the PowerShell scripts I would need to do the above. I'm not asking someone to write that for me. What I'm asking is: does anyone know of a technical impediment to what I want to do, other than the deliberate crippling of the 2012 R2 UI for workgroup users? Would the underlying technologies all still work if I manipulate and control them from a PowerShell script? Obviously a one-word yes or no answer isn't that useful to anyone, so the question is really: yes or no, and why? And if the answer is yes, then how?

    Read the article

  • Seriously, It’s Time to Get Your Content Act Together

    - by Mike Stiles
    Branded content, content marketing, social content, brand journalism, we’re seeing those terms more and more. Why? The technology tools are coming together. We should know. We can gather big data, crunch it, listen to the public, moderate, respond, get to know the customer intimately, know what they like, know what they want, we can target, distribute, amplify, measure engagement and reaction, modify strategy and even automate a great deal of all that. An amazing machine, a sleek, smooth-running engine has been built such that all the parts can interact and work together to deliver peak performance and maximum output. But that engine isn’t going anywhere without any gas. Content is the gas. Yes, we curate other people’s content. We can siphon their gas. There’s tech to help with that too. But as for the creation of original, worthwhile content made for a specific audience, our audience, machines can’t do that…at least not yet. Curated content is great. But somebody has to originate the content for it to be curated and shared. And since the need for good, curated content is obviously large and the desire to share is there, it’s a winning proposition for a brand to be a consistent producer of original content. And yet, it feels like content is an issue we’re avoiding. There’s a reluctance to build a massive pipeline if you have no idea what you’re going to run through it. The C-suite often doesn’t know what content is, that it’s different from ads, where to get it, who makes it, how long it should be, what the point of it is if there’s no hard sell of the product, what it costs, how to use it, how to measure it, how to make sure it’s good, or how to make sure it will keep flowing. It could be the reason many brands aren’t pulling the trigger on socially enabling the enterprise. And that’s a shame, because there are a lot of creative, daring, experimental, uniquely talented entertainers and journalists chomping at the bit to execute content for brands. But for many corporate executives, content is “weird,” and the people who make it are even weirder. The content side of the equation is human. It’s art, but art that can be informed by data. The natural inclination is for brands to turn to their agencies for such creative endeavors. But agencies are falling into one of two categories. They’re failing to transition from ads to content. In “Content Era, What’s the Role of Agencies?” Alexander Jutkowitz says agencies were made for one-hit campaigns, not ongoing content. Or, they’re ready and capable but can’t get clients to do the right things. Agencies have to make money, even if it means continuing to do the wrong things because that’s all the client will agree to. So what we wind up with in the pipeline is advertising, marketing-heavy content, content that was obviously created or spearheaded by non-creative executives, random & inconsistent content, copy written for SEO bots, and other completely uninteresting nightmares. Frank Rose, author of “The Art of Immersion,” writes, “Content without story and excitement is noise pollution.” In the old days, you made an ad and inserted it into shows made by people who knew what they were doing. You could bask in that show’s success and leverage their audience. Now, you are tasked with attracting, amassing and holding your own audience. You may just want to make, advertise and sell your widgets. But now there’s a war on for a precious commodity, attention. People are busy. They have filters to keep uninteresting and irrelevant things out. 
They value their time and expect value back when they give it up. Joe Pulizzi, founder of the Content Marketing Institute, says, "Your customers don't care about you, your products, your services…they care about themselves, their wants and their needs." Is it worth getting serious about content and doing it right? 61% of consumers feel better about a company that delivers custom content (Custom Content Council). Interesting content is one of the top 3 reasons people follow brands on social (Content+). 78% of consumers think organizations that provide custom content want to build good relationships with them (TMG Custom Media). On the B2B side, 80% of business decision makers prefer to get company info in a series of articles vs. an ad. So what's the hang-up? Cited barriers to content marketing are lack of human resources (42%) and lack of budget (35%). 54% of brands don't have a single on-site, dedicated content creator. And only 38% of brands have a content marketing strategy. Tech has built the biggest, most incredible stage for brands that's ever been built. Putting something on that stage is your responsibility. Do a bad show, or no show at all, and you'll be the beautiful, talented actress that never got discovered. @mikestiles Photo: Gabriella Fabbri, stock.xchng

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

    The problem with shared memory

    Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory.

    Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, which deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently.

    At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual- and quad-core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly.

    A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

    A sync-free data structure

    Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single-producer, single-consumer (SPSC) queue. It's easy to write a sync-free, fixed-size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this Intel article which reuses the nodes by putting them in a second queue to send back to the producer.

    * In all these cases, you need to use memory barriers correctly, but these are local to a core, so they don't have the same scalability problems as atomic memory operations.

    Performance tests

    I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64-bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here.
    Both concurrent collections perform better than the lock-based one as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did. So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6-core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting.

    As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that need invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which, after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment!

    Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

    Still to be solved

    The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general-purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
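    To make the idea concrete, here is a minimal sketch of the kind of sync-free, fixed-size SPSC circular buffer described above. It is not the NAct code or the proof of concept benchmarked in this post; it assumes .NET 4.5's System.Threading.Volatile class for the acquire/release reads and writes, and it supports exactly one producer thread and one consumer thread.

    using System.Threading;

    // Minimal single-producer, single-consumer ring buffer sketch.
    // Only the producer writes _tail and only the consumer writes _head,
    // so no CAS or other atomic read-modify-write operations are needed;
    // Volatile.Read/Write supply the per-core memory barriers mentioned above.
    public sealed class SpscRingBuffer<T>
    {
        private readonly T[] _buffer;
        private int _head; // next slot to read; written only by the consumer
        private int _tail; // next slot to write; written only by the producer

        public SpscRingBuffer(int capacity)
        {
            // One slot is left empty so head == tail always means "empty", never "full".
            _buffer = new T[capacity + 1];
        }

        // Call only from the producer thread.
        public bool TryEnqueue(T item)
        {
            int tail = _tail;
            int next = (tail + 1) % _buffer.Length;
            if (next == Volatile.Read(ref _head))
                return false; // full

            _buffer[tail] = item;
            Volatile.Write(ref _tail, next); // publish the item to the consumer
            return true;
        }

        // Call only from the consumer thread.
        public bool TryDequeue(out T item)
        {
            int head = _head;
            if (head == Volatile.Read(ref _tail))
            {
                item = default(T);
                return false; // empty
            }

            item = _buffer[head];
            Volatile.Write(ref _head, (head + 1) % _buffer.Length); // hand the slot back to the producer
            return true;
        }
    }

    The spare slot is the usual trick for telling a full buffer from an empty one without a shared counter, which keeps every field single-writer and the whole structure free of atomic read-modify-write operations.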

    Read the article

  • Access Services in SharePoint Server 2010

    - by Wayne
    Another SharePoint Server 2010 feature that cannot go unnoticed is Access Services. Access Services is a service in SharePoint Server 2010 that allows administrators to view, edit, and configure a Microsoft Access application within a Web browser. Access Services settings support backup and recovery, regardless of whether there is a UI setting in Central Administration. However, backup and recovery only apply to service-level and administrative-level settings; end-user content from the Access application is not backed up as part of this process. Access Services has Windows PowerShell functionality that can be used to provide the service that uses settings from a previous backup; configure and manage macro and query settings; manage and configure session management; and configure all the global settings of the service.

    Key Benefits of SharePoint Server Access Services

    Easier access to the right tools: The enhanced, customizable Ribbon in Access 2010 makes it easy to uncover more commands so you can focus on the end product. The new Microsoft Office Backstage™ view is yet another feature that can help you easily analyze and document your database, share, publish, and customize your Access 2010 experience, all from one convenient location.

    Helps build databases effortlessly and quickly: Out-of-the-box templates and reusable components make Access Services the fastest, simplest database solution available. It helps you find new pre-built templates which you can start using without customization, or select templates created by your peers in the Access online community and customize them to meet your needs. It builds your databases with new modular components. New Application Parts enable you to add a set of common Access components, such as a table and form for task management, to your database in a few simple clicks. Database navigation is now simplified. It creates Navigation Forms and makes your frequently used forms and reports more accessible without writing any code or logic.

    Create impactful forms and reports: Whether it's an inventory of your assets or a customer sales database, Access 2010 brings the innovative tools you'd expect from Microsoft Office. Access Services helps you easily spot trends and add emphasis to your data, quickly create coordinating database forms and reports, and bring the Web into your database.

    Obtain a centralized landing pad for your data: Access 2010 offers easy ways to bring your data together and help increase work quality. New technologies help break down barriers so you can share and work together on your databases, making you or your team more efficient and productive.

    Add automation and complex expressions: If you need a more robust database design, such as preventing record deletion if a specific condition is met, or if you need to create calculations to forecast your budget, Access 2010 empowers you to be your own developer. The enhanced Expression Builder greatly simplifies your expression-building experience with IntelliSense®. With the revamped Macro Designer, it's now even easier for you to add basic logic to your database. New Data Macros allow you to attach logic to your data, centralizing the logic on the table, not the objects that update your data.

    Key features of Access Services 2010

    - Access database content through a Web browser: Newly added Access Services on Microsoft SharePoint Server 2010 enables you to make your databases available on the Web with new Web databases.
    Users without an Access client can open Web forms and reports via a browser and changes are automatically synchronized.

    - Simplify how you access the features you need: The Ribbon, improved in Access 2010, helps you access commands even more quickly by enabling you to customize or create your own tabs. The new Microsoft Office Backstage view replaces the traditional File menu to provide one central, organized location for all of your document management tasks.

    - Codeless navigation: Use professional-looking, web-like navigation forms to make frequently used forms and reports more accessible without writing any code or logic.

    - Easily reuse Access items in other databases: Use Application Parts to add pre-built Access components for common tasks to your database in a few simple clicks. You can also package common database components, such as data entry forms and reports for task management, and reuse them across your organization or other databases.

    - Simplified formatting: By using Office themes you can create coordinating professional forms and reports across your database. Simply select a familiar and great-looking Office theme, or design your own, and apply it to your database. Newly created Access objects will automatically match your chosen theme.

    Read the article

  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not free – are also an option. All of these formats are useful for one need or another. As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring books. I assure you that I don't get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.) As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I'm certain that the minute I discard the book, I'm going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically, as the books have a way of taking over significant square footage in our house! Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn't fit neatly into a book. For as many years as I have been working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn't available anywhere else. Another great thing about blogs is the connection to community and the dialog that can ensue between people with common interests. With the trend towards electronic formats for books, I imagine that we'll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard many cases of readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I've provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip.
Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109 but I have no idea how long that will remain valid. My latest books (#9 and #10 which are milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for the downloads for the book will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t make reference to such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point. Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • XML failing to deserialise

    - by Carnotaurus
    I call a method to get my pages [see GetPages(String xmlFullFilePath)]. The FromXElement method is supposed to deserialise the LitePropertyData elements to strongly typed LitePropertyData objects. Instead it fails on the following line: return (T)xmlSerializer.Deserialize(memoryStream); and gives the following error: <LitePropertyData xmlns=''> was not expected. What am I doing wrong? I have included the methods that I call and the xml data:

    public static T FromXElement<T>(this XElement xElement)
    {
        using (var memoryStream = new MemoryStream(Encoding.ASCII.GetBytes(xElement.ToString())))
        {
            var xmlSerializer = new XmlSerializer(typeof(T));
            return (T)xmlSerializer.Deserialize(memoryStream);
        }
    }

    public static List<LitePageData> GetPages(String xmlFullFilePath)
    {
        XDocument document = XDocument.Load(xmlFullFilePath);
        List<LitePageData> results = (from record in document.Descendants("row")
            select new LitePageData
            {
                Guid = IsValid(record, "Guid") ? record.Element("Guid").Value : null,
                ParentID = IsValid(record, "ParentID") ? Convert.ToInt32(record.Element("ParentID").Value) : (Int32?)null,
                Created = Convert.ToDateTime(record.Element("Created").Value),
                Changed = Convert.ToDateTime(record.Element("Changed").Value),
                Name = record.Element("Name").Value,
                ID = Convert.ToInt32(record.Element("ID").Value),
                LitePageTypeID = IsValid(record, "ParentID") ? Convert.ToInt32(record.Element("ParentID").Value) : (Int32?)null,
                Html = record.Element("Html").Value,
                FriendlyName = record.Element("FriendlyName").Value,
                Properties = record.Element("Properties") != null ? record.Element("Properties").Element("LitePropertyData").FromXElement<List<LitePropertyData>>() : new List<LitePropertyData>()
            }).ToList();
        return results;
    }

    Here is the xml:

    <?xml version="1.0" encoding="utf-8"?>
    <root>
      <rows>
        <row>
          <ID>1</ID>
          <ImageUrl></ImageUrl>
          <Html>Home page</Html>
          <Created>01-01-2012</Created>
          <Changed>01-01-2012</Changed>
          <Name>Home page</Name>
          <FriendlyName>home-page</FriendlyName>
        </row>
        <row xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Guid>edeaf468-f490-4271-bf4d-be145bc6a1fd</Guid>
          <ID>8</ID>
          <Name>Unused</Name>
          <ParentID>1</ParentID>
          <Created>2006-03-25T10:57:17</Created>
          <Changed>2012-07-17T12:24:30.0984747+01:00</Changed>
          <ChangedBy />
          <LitePageTypeID xsi:nil="true" />
          <Html> What is the purpose of this option? This option checks the current document for accessibility issues. It uses Bobby to provide details of whether the current web page conforms to W3C's WCAG criteria for web content accessibility. Issues with Bobby and Cynthia Bobby and Cynthia are free services that supposedly allow a user to expose web page accessibility barriers. It is something of a guide but perhaps a blunt instrument. I tested a few of the webpages that I have designed. Sure enough, my pages fall short and for good reason. I am not about to claim that Bobby and Cynthia are useless. Although it is useful and commendable tool, it project appears to be overly ambitious. Nevertheless, let me explain my issues with Bobby and Cynthia: First, certain W3C standards for designing web documents are often too strict and unworkable. For instance, in some versions W3C standards for HTML, certain tags should not include a particular attribute, whereas in others they are requisite if the document is to be "well-formed". The standard that a designer chooses is determined usually by the requirements specification document.
This specifies which browsers and versions of those browsers that the web page is expected to correctly display. Forcing a hypertext document to conform strictly to a specific W3C standard for HTML is often no simple task. In the worst case, it cannot conform without losing some aesthetics or accessibility functionality. Second, the case of HTML documents is not an isolated case. Standards for XML, XSL, JavaScript, VBScript, are analogous. Therefore, you might imagine the problems when you begin to combine these languages and formats in an HTML document. Third, there is always more than one way to skin a cat. For example, Bobby and Cynthia may flag those IMG tags that do not contain a TITLE attribute. There might be good reason that a web developer chooses not to include the title attribute. The title attribute has a limited numbers of characters and does not support carriage returns. This is a major defect in the design of this tag. In fact, before the TITLE attribute was supported, there was the ALT attribute. Most browsers support both, yet they both perform a similar function. However, both attributes share the same deficiencies. In practice, there are instances where neither attribute would be used. Instead, for example, the developer would write some JavaScript or VBScript to circumvent these deficiencies. The concern is that Bobby and Cynthia would not notice this because it does not "understand" what the JavaScript does. </Html>
          <FriendlyName>unused</FriendlyName>
          <IsDeleted>false</IsDeleted>
          <Properties>
            <LitePropertyData>
              <Description>Image for the page</Description>
              <DisplayEditUI>true</DisplayEditUI>
              <OwnerTab>1</OwnerTab>
              <DisplayName>Image Url</DisplayName>
              <FieldOrder>1</FieldOrder>
              <IsRequired>false</IsRequired>
              <Name>ImageUrl</Name>
              <IsModified>false</IsModified>
              <ParentPageID>3</ParentPageID>
              <Type>String</Type>
              <Value xsi:type="xsd:string">smarter.jpg</Value>
            </LitePropertyData>
            <LitePropertyData>
              <Description>WebItemApplicationEnum</Description>
              <DisplayEditUI>true</DisplayEditUI>
              <OwnerTab>1</OwnerTab>
              <DisplayName>WebItemApplicationEnum</DisplayName>
              <FieldOrder>1</FieldOrder>
              <IsRequired>false</IsRequired>
              <Name>WebItemApplicationEnum</Name>
              <IsModified>false</IsModified>
              <ParentPageID>3</ParentPageID>
              <Type>Number</Type>
              <Value xsi:type="xsd:string">1</Value>
            </LitePropertyData>
          </Properties>
          <Seo>
            <Author>Phil Carney</Author>
            <Classification />
            <Copyright>Carnotaurus</Copyright>
            <Description> What is the purpose of this option? This option checks the current document for accessibility issues. It uses Bobby to provide details of whether the current web page conforms to W3C's WCAG criteria for web content accessibility. Issues with Bobby and Cynthia Bobby and Cynthia are free services that supposedly allow a user to expose web page accessibility barriers. It is something of a guide but perhaps a blunt instrument. I tested a few of the webpages that I have designed. Sure enough, my pages fall short and for good reason. I am not about to claim that Bobby and Cynthia are useless. Although it is useful and commendable tool, it project appears to be overly ambitious. Nevertheless, let me explain my issues with Bobby and Cynthia: First, certain W3C standards for designing web documents are often too strict and unworkable. For instance, in some versions W3C standards for HTML, certain tags should not include a particular attribute, whereas in others they are requisite if the document is to be "well-formed".
The standard that a designer chooses is determined usually by the requirements specification document. This specifies which browsers and versions of those browsers that the web page is expected to correctly display. Forcing a hypertext document to conform strictly to a specific W3C standard for HTML is often no simple task. In the worst case, it cannot conform without losing some aesthetics or accessibility functionality. Second, the case of HTML documents is not an isolated case. Standards for XML, XSL, JavaScript, VBScript, are analogous. Therefore, you might imagine the problems when you begin to combine these languages and formats in an HTML document. Third, there is always more than one way to skin a cat. For example, Bobby and Cynthia may flag those IMG tags that do not contain a TITLE attribute. There might be good reason that a web developer chooses not to include the title attribute. The title attribute has a limited numbers of characters and does not support carriage returns. This is a major defect in the design of this tag. In fact, before the TITLE attribute was supported, there was the ALT attribute. Most browsers support both, yet they both perform a similar function. However, both attributes share the same deficiencies. In practice, there are instances where neither attribute would be used. Instead, for example, the developer would write some JavaScript or VBScript to circumvent these deficiencies. The concern is that Bobby and Cynthia would not notice this because it does not "understand" what the JavaScript does. </Description>
            <Keywords>unused</Keywords>
            <Title>unused</Title>
          </Seo>
        </row>
      </rows>
    </root>

    EDIT Here are my entities:

    public class LitePropertyData
    {
        public virtual string Description { get; set; }
        public virtual bool DisplayEditUI { get; set; }
        public int OwnerTab { get; set; }
        public virtual string DisplayName { get; set; }
        public int FieldOrder { get; set; }
        public bool IsRequired { get; set; }
        public string Name { get; set; }
        public virtual bool IsModified { get; set; }
        public virtual int ParentPageID { get; set; }
        public LiteDataType Type { get; set; }
        public object Value { get; set; }
    }

    [Serializable]
    public class LitePageData
    {
        public String Guid { get; set; }
        public Int32 ID { get; set; }
        public String Name { get; set; }
        public Int32? ParentID { get; set; }
        public DateTime Created { get; set; }
        public String CreatedBy { get; set; }
        public DateTime Changed { get; set; }
        public String ChangedBy { get; set; }
        public Int32? LitePageTypeID { get; set; }
        public String Html { get; set; }
        public String FriendlyName { get; set; }
        public Boolean IsDeleted { get; set; }
        public List<LitePropertyData> Properties { get; set; }
        public LiteSeoPageData Seo { get; set; }

        /// <summary>
        /// Saves the specified XML full file path.
        /// </summary>
        /// <param name="xmlFullFilePath">The XML full file path.</param>
        public void Save(String xmlFullFilePath)
        {
            XDocument doc = XDocument.Load(xmlFullFilePath);
            XElement demoNode = this.ToXElement<LitePageData>();
            demoNode.Name = "row";
            doc.Descendants("rows").Single().Add(demoNode);
            doc.Save(xmlFullFilePath);
        }
    }
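    A hedged note on the likely cause, with a sketch rather than a confirmed fix: an XmlSerializer constructed for List<LitePropertyData> expects a root element named ArrayOfLitePropertyData, so handing it a single <LitePropertyData> element produces exactly the "<LitePropertyData xmlns=''> was not expected" error shown above. One way around it is to deserialise each <LitePropertyData> child on its own, so the root element name always matches the type being deserialised. The helper below is untested against the exact model above; it assumes the FromXElement<T> extension and LitePropertyData class from the question are in scope, and that LiteDataType defines String and Number members for the <Type> values in the sample.

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    public static class LitePropertyDataReader
    {
        // Reads <Properties> from a <row> element by deserialising each
        // <LitePropertyData> child individually, so the serializer's expected
        // root element (LitePropertyData) matches the XML it is given.
        public static List<LitePropertyData> ReadProperties(XElement row)
        {
            XElement properties = row.Element("Properties");
            if (properties == null)
            {
                return new List<LitePropertyData>();
            }

            return properties.Elements("LitePropertyData")
                             .Select(p => p.FromXElement<LitePropertyData>())
                             .ToList();
        }
    }

    In GetPages, the Properties assignment could then call ReadProperties(record) instead of deserialising a List<LitePropertyData> from a single child element.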

    Read the article

< Previous Page | 1 2 3 4 5  | Next Page >