Search Results

Search found 133749 results on 5350 pages for 'user data'.


  • Top Reasons You Need A User Engagement Platform

    - by Michael Snow
    Guest post by: Amit Sircar, Senior Sales Consultant, Oracle

    Deliver complex enterprise functionality through a simple, intuitive, and unified User Interface (UI)
    The modern enterprise contains a wide range of applications that are used to manage the business and drive competitive advantage. Organizations respond by creating a complex structure that results in a functional and management grouping of users. Each of these groups of users requires access to multiple applications and information sources in order to perform their job functions. This leads to the lack of a unified view of enterprise information, inconsistent user interfaces, and disjointed security. To be effective, portals must be designed from the end-user perspective, enabling the user to accomplish as many tasks as possible while visiting the fewest number of portals. This requires rethinking the way that portals are built, moving from a functional business-unit perspective to a user-focused, process-oriented point of view. Oracle WebCenter provides the Common User Experience Architecture that allows organizations to seamlessly present a unified view of enterprise information tailored to a particular user's role and preferences. This architecture provides the best practices, design patterns, and delivery mechanism for myriad services, applications, and data sources. In order to serve as a primary system of access, Oracle WebCenter also provides access to unstructured content and to other users via integrated search, service-oriented artifacts, content management, and collaboration tools.

    Provide a modern and engaging experience without modifying the core business application
    Web 2.0 technologies such as blogs, wikis, forums, and social media sites are having a profound impact on the public internet. These technologies can be leveraged by enterprises to add significant value to the business. Organizations need to integrate these technologies directly into their business applications while continuing to meet their security and governance needs. To deliver richer connections and become a more agile and intelligent business, WebCenter provides an enterprise portal platform that contains pre-integrated, standards-based Enterprise 2.0 services. These Enterprise 2.0 services can be easily accessed, integrated, and utilized by users. By giving users the ability to use and integrate Enterprise 2.0 services such as tags, links, wikis, activities, blogs, or social networking directly with their portals and applications, they are empowered to make richer connections, optimize their productivity, and ultimately increase the value of their applications.

    Foster a collaborative experience
    The organizational workplace has undergone a major change in the last decade. With increasing globalization and a distributed workforce, project teams may be physically separated by large distances. Online collaboration technologies are becoming a critical resource for enabling virtual teams to share information and work together effectively. Oracle WebCenter delivers dynamic business communities with rich services to empower teams to quickly and efficiently manage their information, applications, projects, and people without requiring IT assistance. It brings together the latest technology around Enterprise 2.0 and social computing, communities, personal productivity, and ad hoc team interactions without any development effort. It enables sharing and collaboration on team content, focusing an organization's valuable resources on solving business problems, tapping into new ideas, and reducing time-to-market.

    Mobile Support
    The traditional workplace dynamics that required employees to access their work applications from their desktops have undergone a fundamental shift. Employees used to work primarily from company offices and relied on an IT-issued computer for performing their job functions. With the introduction of flexible work hours and the growth of remote workers, more and more employees need the ability to remain productive even when they do not have access to a computer, via the use of tablets and smartphones. In addition, customers and citizens have come to expect 24x7 access to resources and websites from wherever they are located. Tablets and smartphones have empowered everyone to quickly access the services they need anytime and from any place. WebCenter provides out-of-the-box capabilities to deliver this mobile experience in a seamless manner. Seeded device profiles and toolkits within WebCenter can be used to render the same web pages on multiple target devices such as iPads, iPhones, and Android devices. Web designers can preview the portal using the built-in simulator, make the necessary updates, and then deploy their UI design for the targeted device.

    Conclusion
    The competitive economy and resource constraints facing organizations today require them to find ways to make their applications, portals, and websites more agile and intelligent, and their knowledge workers more productive no matter where they are located. Organizations need to provide faster access to relevant information and resources, enhance existing applications and business processes with rich Enterprise 2.0 services, and seamlessly deliver content to mobile platforms. Oracle WebCenter successfully meets these challenges by providing the modern user experience platform for the enterprise and the Web.

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I'm sure some of you have read pieces about Hadoop World, and I did see some headlines which were somewhat, shall we say, interesting. I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co. was one of the highlights of the conference for me. The reason was very simple: he addressed some real use cases outside of internet and ad platforms. The following are my notes; since the keynote was recorded, I presume you can go and look at Hadoopworld.com at some point…

    On the use cases that were mentioned:
    - ETL – how can I do complex data transformation at scale
    - Doing Basel III liquidity analysis
    - Private banking – transaction filtering to feed [relational] data marts
    - Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere
    - 360-degree view of customers – become pro-active and look at events across lines of business. For example, make sure the mortgage folks know about direct deposits being stopped into an account, and ensure the bank is pro-active in servicing the customer
    - Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross-reference activity across businesses and geographies]
    - Data mining – bypass data engineering [I interpret this as running on all of a large data set rather than on samples]
    - Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur, grab web logs, firewall logs and rules, and start to figure out who is trying to log in. Is this me, who forgot his password, or is it someone in some other country trying to guess passwords?
    - Trade quality analysis – do a batch analysis of all trades done and run them through an analysis or comparison pipeline

    One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI tools and Big Data reporting, and new tools should work with those tools rather than be separate. Security and entitlement – how to protect data within a large cluster from unwanted snooping – was another topic that came up. I thought his elephant-ears graph was interesting (I couldn't actually read the points on it, but the concept certainly made some sense), and it was interesting – when asked to show hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years.

    Another interesting session was the one from Disney, discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with database technologies. I thought this was one of the best sessions I have seen in a long time. It discussed a real use case: where problems existed, how they were solved, and how Disney planned some of it. The planning focused on three things/phases:
    - Determine the strategy – design a platform and evangelize it within the organization
    - Focus on the people – hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), and leverage a partner with experience
    - Work on execution of the strategy – implement the platform Hadoop next to the other technologies and work toward the DaaS platform
    This kind of fitted with some of the LinkedIn comments, best summarized in "Think Platform – Think Hadoop". In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform.
    One general observation: I got the impression that we have knowledge gaps left and right. On the one hand, people are looking for more information and details on the Hadoop tools and languages. On the other hand, I got the impression that the capabilities of today's relational databases are underestimated, mostly in terms of data volumes and parallel processing capabilities, or things like commodity hardware scale-out models. All in all, I liked this conference. It was great to chat with a wide range of people on Oracle big data, on big data in general, on use cases, and all sorts of other stuff. I just hope they get a set of bigger rooms next time… and yes, I hope I'm going to be back next year!

    Read the article

  • Oracle's Fusion User Experience Raises the Bar

    Hear Jeremy Ashley, Oracle's Vice President of Applications User Experience, and Patanjali Venkatacharya, Applications User Experience Architect, speak with Cliff about Oracle's innovative user experience methodology and the benefits it provides customers.

    Read the article

  • Is using Capistrano for user maintenance tasks on university lab feasible?

    - by danielkza
    I've been looking around for tools to replace some legacy scripts for creating and maintaining accounts in a university computer lab ecosystem consisting of things like:
    - LDAP and Kerberos for authentication
    - User home storage and web pages
    - Entries in an SQL database
    - Printing quotas
    - Mailing lists, etc.
    I'd also like to automate machine and VM membership for Kerberos and Puppet if possible. I've found Capistrano, and while the basic principle of running tasks on remote hosts through SSH seems to fit, and the DSL in Ruby looks quite nice, I've found that most documentation is related to application deployment, not generic tasks. I'm also not aware of any good way to parameterize tasks so I can pass in the user information for account creation. Is there something about Capistrano I am missing, or is it not the correct tool for this job? Are there any more useful alternatives?

    Read the article

  • Terminal error messages: bash: /dev/cgroup/cpu/user/2823/tasks: No such file or directory

    - by sasaenator
    This is what I get when I start up the terminal:
    bash: /dev/cgroup/cpu/user/2823/tasks: No such file or directory
    bash: /dev/cgroup/cpu/user/2823/notify_on_release: No such file or directory
    bash: /dev/cgroup/cpu/user/2823/tasks: No such file or directory
    bash: /dev/cgroup/cpu/user/2823/notify_on_release: No such file or directory
    sasa@sasa:~$
    I reinstalled 10.10 yesterday because of other problems; I didn't have this error message before. I have a separate /home partition, and the new installation picked up almost all of the old settings, including some I don't like, but it looks like that is not the problem, or maybe I am wrong? I wouldn't ask if I knew! :) I'll be glad to post more info if someone needs it!

    Read the article

  • How to restrict user to a particular folder in CentOS 6?

    - by Chris Demetriad
    I will need to create users so developers can log in and clone/pull/push repositories from a GitHub-like platform. I've managed to add a user (using root) to this CentOS machine; I now have this line in /etc/passwd:
        chris:x:32008:32010::/home/chris/public_html:/bin/bash
    ...and this in /etc/shadow:
        chris:$1$ruUeLtTu$onAY2hdu1J.UmHajEIlmR.:15385:0:99999:7:::
    I am able to SSH into the server, I have permission to create a folder, and I guess that should be enough. But I am also able to see other files and folders outside public_html. How can I actually restrict the user to a particular directory so he can't "cd out" of his folder?
    Update:
        root@echo [~]# ls -ld /home/moove
        drwx--x--x 21 moove moove 4096 Mar 22 16:16 /home/moove/
        root@echo [~]# ls -ld /home/moove/public_html
        drwxr-x--- 11 moove nobody 4096 Mar 27 11:29 /home/moove/public_html/
        root@echo [~]# ls -ld /home/moove/public_html/dev
        drwxr-x--- 12 moove nobody 4096 Mar 27 14:47 /home/moove/public_html/dev/
        root@echo [~]# ls -ld /home/moove/public_html/dev/arsenal
        drwxr-xr-x 3 arsenal moove 4096 Mar 27 14:53 /home/moove/public_html/dev/arsenal/
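    One commonly used way to jail an SSH account like this, sketched below rather than taken from the question, is OpenSSH's ChrootDirectory directive; the group name, paths, and modes are placeholders to adapt:

        # /etc/ssh/sshd_config -- hypothetical example
        Match Group jailed
            ChrootDirectory %h               # %h = the account's home dir; must be root-owned and not user-writable
            AllowTcpForwarding no
            # ForceCommand internal-sftp     # uncomment for SFTP-only access

        # one-time setup as root (names are examples)
        groupadd jailed
        usermod -a -G jailed chris
        chown root:root /home/chris/public_html
        chmod 755 /home/chris/public_html
        mkdir -p /home/chris/public_html/work && chown chris: /home/chris/public_html/work

    Note that a chrooted account only gets a working interactive shell or git client if the required binaries and libraries are copied inside the jail, which is why SFTP-only jails, or a restricted shell such as git-shell, are the more common compromise for developer accounts.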

    Read the article

  • Enabling super single user mode with SQL Server

    - by simonsabin
    I recently got an email from a fellow MVP about single user mode. It made me think about some features I had just been looking at, and so I started playing. The annoyance about single user mode for SQL Server is that it's not really single user, but more like single connection mode. So how can you get round it? Well, there is an extension to the -m startup option that allows you to specify an application name, and only connections with that application name can connect. This is very useful if you have...(read more)
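    For illustration, the startup option the post is referring to looks roughly like this; the instance name and application name are examples, and the application name must match what the client reports, including case:

        REM restart the default instance in single-user mode, limited to the sqlcmd client
        net stop MSSQLSERVER
        net start MSSQLSERVER /m"SQLCMD"

        REM this connection is accepted because sqlcmd reports its application name as SQLCMD
        sqlcmd -S . -E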

    Read the article

  • Security Controls on data for P6 Analytics

    - by Jeffrey McDaniel
    The Star database and P6 Analytics calculate security based on P6 security, taking OBS, global, project, cost, and resource security into consideration. If there is some concern that users are not seeing the expected data in P6 Analytics, here are some areas to review:
    1. Whether a user has cost security is based on the project-level security privileges - either View Project Costs/Financials or Edit EPS Financials. If the user expects to see costs, make sure one of these privileges is allocated.
    2. The user must have OBS access on the project, not at the WBS level. WBS-level security is not supported, so make sure the user has OBS access at the project level.
    3. Resource access is determined by what is granted in P6, so verify the resource access granted to this user in P6. Resource security is hierarchical, and project access will override resource access based on the way security policies are applied.
    4. Module access must be given to a P6 user for that user to come over into Star/P6 Analytics. In earlier versions of the RDB there was a report_user_flag on the Users table; this flag field is no longer used after P6 Reporting Database 2.1.
    5. For P6 Reporting Database versions 2.2 and higher, the Extended Schema Security service must be run to calculate all security. After any changes to privileges or security, this service must be rerun before any ETL.
    6. In P6 Analytics 2.0 or higher, a WebLogic user must exist that matches the P6 username. For example, the user Tim must exist in both P6 and WebLogic for Tim to be able to log into P6 Analytics and access data based on P6 security. In earlier versions the username needed to exist in the RPD.
    7. Cache in OBI is another area that can make it seem as though a user isn't seeing the data they expect. While cache can be beneficial for performance in OBI, if the data is outdated it can return older, stale results. Clearing or turning off cache when rerunning a query can determine whether the returned result set came from cache or from the database.

    Read the article

  • Sort Data in Windows Phone using Collection View Source

    - by psheriff
    When you write a Windows Phone application you will most likely consume data from a web service somewhere. If that service returns data to you in a sort order that you do not want, you have an easy alternative to sort the data without writing any C# or VB code. You use the built-in CollectionViewSource object in XAML to perform the sorting for you. This assumes that you can get the data into a collection that implements the IEnumerable or IList interfaces.

    For this example, I will be using a simple Product class with two properties, and a list of Product objects using the generic List class. Try this out by creating a Product class as shown in the following code:

        public class Product
        {
          public Product(int id, string name)
          {
            ProductId = id;
            ProductName = name;
          }

          public int ProductId { get; set; }
          public string ProductName { get; set; }
        }

    Create a collection class that initializes a property called DataCollection with some sample data as shown in the code below:

        public class Products : List<Product>
        {
          public Products()
          {
            InitCollection();
          }

          public List<Product> DataCollection { get; set; }

          List<Product> InitCollection()
          {
            DataCollection = new List<Product>();
            DataCollection.Add(new Product(3, "PDSA .NET Productivity Framework"));
            DataCollection.Add(new Product(1, "Haystack Code Generator for .NET"));
            DataCollection.Add(new Product(2, "Fundamentals of .NET eBook"));
            return DataCollection;
          }
        }

    Notice that the data added to the collection is not in any particular order. Create a Windows Phone page and add two XML namespaces to the Page.

        xmlns:scm="clr-namespace:System.ComponentModel;assembly=System.Windows"
        xmlns:local="clr-namespace:WPSortData"

    The 'local' namespace is an alias to the name of the project that you created (in this case WPSortData). The 'scm' namespace references the System.Windows.dll and is needed for the SortDescription class that you will use for sorting the data. Create a phone:PhoneApplicationPage.Resources section in your Windows Phone page that looks like the following:

        <phone:PhoneApplicationPage.Resources>
          <local:Products x:Key="products" />
          <CollectionViewSource x:Key="prodCollection"
              Source="{Binding Source={StaticResource products}, Path=DataCollection}">
            <CollectionViewSource.SortDescriptions>
              <scm:SortDescription PropertyName="ProductName" Direction="Ascending" />
            </CollectionViewSource.SortDescriptions>
          </CollectionViewSource>
        </phone:PhoneApplicationPage.Resources>

    The first element in the resources section creates an instance of your Products class. The constructor of the Products class calls the InitCollection method, which creates three Product objects and adds them to the DataCollection property of the Products class. Once the Products object is instantiated, you now add a CollectionViewSource object in XAML using the Products object as the source of the data for this collection. A CollectionViewSource has a SortDescriptions collection that allows you to specify a set of SortDescription objects. Each object can set a PropertyName and a Direction property. As you see in the above code, you set the PropertyName equal to the ProductName property of the Product object and tell it to sort in an Ascending direction.

    All you have to do now is to create a ListBox control and set its ItemsSource property to the CollectionViewSource object. The ListBox displays the data in sorted order by ProductName, and you did not have to write any LINQ queries or other code to sort the data!

        <ListBox
            ItemsSource="{Binding Source={StaticResource prodCollection}}"
            DisplayMemberPath="ProductName" />

    Summary
    In this blog post you learned that you can sort any data without having to change the source code of where the data comes from. Simply feed the data into a CollectionViewSource in XAML, set some sort descriptions in XAML, and the rest is done for you! This comes in very handy when you are consuming data from a source where the data is given to you and you do not have control over the sorting.

    NOTE: You can download this article and many samples like the one shown in this blog entry at my website, http://www.pdsa.com/downloads. Select "Tips and Tricks", then "Sort Data in Windows Phone using Collection View Source" from the drop-down list.

    Good luck with your coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!

    Read the article

  • Connect to QuickBooks from PowerBuilder using RSSBus ADO.NET Data Provider

    - by dataintegration
    The RSSBus ADO.NET providers are easy-to-use, standards-based controls that can be used from any platform or development technology that supports Microsoft .NET, including Sybase PowerBuilder. In this article we show how to use the RSSBus ADO.NET Provider for QuickBooks in PowerBuilder. A similar approach can be used from PowerBuilder with other RSSBus ADO.NET Data Providers to access data from Salesforce, SharePoint, Dynamics CRM, Google, OData, etc. In this article we will show how to create a basic PowerBuilder application that performs CRUD operations using the RSSBus ADO.NET Provider for QuickBooks.
    Step 1: Open PowerBuilder and create a new WPF Window Application solution.
    Step 2: Add all the Visual Controls needed for the connection properties.
    Step 3: Add the DataGrid control from the .NET controls.
    Step 4: Configure the columns of the DataGrid control as shown below. The column bindings will depend on the table.

        <DataGrid AutoGenerateColumns="False" Margin="13,249,12,14" Name="datagrid1" TabIndex="70" ItemsSource="{Binding}">
          <DataGrid.Columns>
            <DataGridTextColumn x:Name="idColumn" Binding="{Binding Path=ID}" Header="ID" Width="SizeToHeader" />
            <DataGridTextColumn x:Name="nameColumn" Binding="{Binding Path=Name}" Header="Name" Width="SizeToHeader" />
            ...
          </DataGrid.Columns>
        </DataGrid>

    Step 5: Add a reference to the RSSBus ADO.NET Provider for QuickBooks assembly.
    Step 6: Optional: Set the QBXML Version to 6. Some of the tables in QuickBooks require a later version of QuickBooks to support updates and deletes. Please check the help for details.
    Connect the DataGrid: Once the visual elements have been configured, developers can use standard ADO.NET objects like Connection, Command, and DataAdapter to populate a DataTable with the results of a SQL query:

        System.Data.RSSBus.QuickBooks.QuickBooksConnection conn
        conn = create System.Data.RSSBus.QuickBooks.QuickBooksConnection(connectionString)
        System.Data.RSSBus.QuickBooks.QuickBooksCommand comm
        comm = create System.Data.RSSBus.QuickBooks.QuickBooksCommand(command, conn)
        System.Data.DataTable table
        table = create System.Data.DataTable
        System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter dataAdapter
        dataAdapter = create System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter(comm)
        dataAdapter.Fill(table)
        datagrid1.ItemsSource=table.DefaultView

    The code above can be used to bind data from any query (set this in command) to the DataGrid. The DataGrid should have the same columns as those returned from the SELECT statement.
    PowerBuilder Sample Project: The included sample project includes the steps outlined in this article. You will also need the QuickBooks ADO.NET Data Provider to make the connection. You can download a free trial here.

    Read the article

  • data maintenance/migrations in image-based systems

    - by User
    Web applications usually have a database, and the code and the database work hand in hand. That is why frameworks like Ruby on Rails and Django create migration files. There are also servers written in Self or Smalltalk or other image-based systems that face the same problem, except that code is not written on the server but in the programmer's own separate image. How do these systems deal with a changing schema and changing classes/prototypes? Which way do the migrations go? Example: what is the process for a new attribute going from the programmer's idea to the server code and all existing objects? I found chapter 8 of the GemStone/S manual, but it does not really talk about the process of shipping code to the server.

    Read the article

  • Performing user authentication in a CodeIgniter controller constructor?

    - by msanford
    In "The Clean Code Talks -- Unit Testing" (http://youtu.be/wEhu57pih5w), Miško Hevery mentions that "as little work as possible should be done in constructors [to make classes more easily testable]'. It got me thinking about the way I have implemented my user authentication mechanism. Having delved into MVC development through CodeIgniter, I designed my first web application to perform user authentication for protected resources in controllers' constructors in cases where every public function in that controller requires the user to be authenticated. For controllers with public methods having mixed authentication requirements, I would naturally move the authentication from the constructor to each method requiring authentication (though I don't currently have a need for this). I made this choice primarily to keep the controller tight, and to ensure that all resources in the controller are always covered. As for code longevity and maintainability: given the application structure, I can't foresee a situation in which one of the affected controllers would need a public method that didn't require user authentication, but I can see this as a potential drawback in general with this implementation (i.e., requiring future refactoring). Is this a good idea?

    Read the article

  • The need for user-defined index types

    - by Greg Low
    Since the removal of the 8KB limit on serialization, the ability to define new data types using SQL CLR integration is now almost at a usable level, apart from one key omission: indexes. We have no ability to create our own types of index to support our data types. As a good example of this, consider that when Microsoft introduced the geometry and geography (spatial) data types, they did so as system CLR data types but also needed to introduce a spatial index as a new type of index. Those of us that...(read more)

    Read the article

  • Download Instagram photos of any user on Windows, Mac & Linux using 4K Stogram

    - by Gopinath
    Instagram is one of the most popular mobile applications for taking pictures and sharing them online. Initially released for iOS devices, Instagram quickly won the hearts of photographers and made it into the top 10 app charts in the Apple App Store. With a simple interface and quick photo-processing features, Instagram's popularity has grown by leaps and bounds. A few months ago, after Instagram released an Android application, Facebook acquired it for around 1 billion dollars. Undoubtedly Instagram is the most popular photo-sharing app on mobile devices, and it's where you find the best photographers sharing their beautiful pictures online. Being such a rich source of photographs, many users love to browse and download images from it. For those looking to download photographs from Instagram, here is an excellent free desktop application – 4K Stogram.
    Once 4K Stogram is installed, you can quickly browse and download the photographs of any user by entering a user name in the search box and clicking the Follow button. The app searches for the user's recent photographs and displays the thumbnails as it downloads them. You can double-click any photograph to view it with your operating system's default photo viewer; open the location of that photograph and you will find the rest of the downloaded photos in the same folder. Once you enter the name of a user, 4K Stogram keeps tracking that user until you manually delete the entry, and it automatically downloads the latest photographs when it is launched again. In this way you can track multiple users and download their photographs.
    Here is a quick rundown of 4K Stogram features:
    - View and download photos of Instagram users
    - No need to have an Instagram account; just enter a user name and download all photos
    - Track multiple users and download their photos
    - Automatically download the latest photos
    - All new photos are marked with an indicator for quick reference
    - Free and open source desktop application that runs on Windows, Mac & Linux
    Link to 4K Stogram application website

    Read the article

  • Swiss Re increases data warehouse performance and deploys in record time

    - by KLaker
    Great information on yet another data warehouse deployment on Exadata. A little background on Swiss Re: in 2002, Swiss Re established a data warehouse for its client markets and products to gather reinsurance information across all organizational units into an integrated structure. The data warehouse provided the basis for reporting at the group level with drill-down capability to individual contracts, while facilitating application integration and data exchange by using common data standards. Initially focusing on property and casualty reinsurance information only, it now includes life and health reinsurance, insurance, and nonlife insurance information.
    Key highlights of the benefits that Swiss Re achieved by using Exadata:
    - Reduced the time to feed the data warehouse and generate data marts by 58%
    - Reduced average runtime by 24% for standard reports, while comfortably loading two data warehouse refreshes per day with incremental feeds
    - Freed up technical experts by significantly minimizing time spent on tuning activities
    Most importantly, this was one of the fastest project deployments in Swiss Re's history. They went from installation to production in just four months! What is truly surprising is that it took only two weeks from power-on to testing the machine with full data volumes! Business teams at Swiss Re are now able to fully exploit up-to-date analytics across property, casualty, life, health insurance, and reinsurance lines to identify successful products.
    These points are highlighted in the following quotes from Dr. Stephan Gutzwiller, Head of Data Warehouse Services at Swiss Re:
    "We were operating a complete Oracle stack, including servers, storage area network, operating systems, and databases, that was well optimized and delivered very good performance over an extended period of time. When a hardware replacement was scheduled for 2012, Oracle Exadata was a natural choice—and the performance increase was impressive. It enabled us to deliver analytics to our internal customers faster, without hiring more IT staff."
    "The high quality data that is readily available with Oracle Exadata gives us the insight and agility we need to cater to client needs. We also can continue re-engineering to keep up with the increasing demand without having to grow the organization. This combination creates excellent business value."
    Our full press release is available here: http://www.oracle.com/us/corporate/customers/customersearch/swiss-re-1-exadata-ss-2050409.html. If you want more information about how Exadata can increase the performance of your data warehouse, visit our home page: http://www.oracle.com/us/products/database/exadata-database-machine/overview/index.html

    Read the article

  • CPanel - How to stop Apache from running as root user?

    - by ambu
    <?php echo `whoami`; ?>
    This is returning 'root' and I don't know how to prevent it. I'm using WebHost Manager / cPanel, which is supposed to create multiple users/vhosts and have Apache spawn its processes as that user/group. This isn't happening. If I log in to WHM and open the PHP and suEXEC Configuration section, my settings are:
    Default PHP Version (.php files): 5
    PHP 5 Handler: cgi
    PHP 4 Handler: none
    Apache suEXEC: on
    What's wrong? How can I get Apache to run as the correct user rather than root?

    Read the article

  • How To Create a Shortcut That Lets a Standard User Run An Application as Administrator

    - by Chris Hoffman
    Want to allow a standard user account to run an application as administrator without a UAC or password prompt? You can easily create a shortcut that uses the runas command with the /savecred switch, which saves the password. Note that using /savecred could be considered a security hole – a standard user will be able to use the runas /savecred command to run any command as administrator without entering a password. However, it's still useful for situations where this doesn't matter much – perhaps you want to allow a child's standard user account to run a game as Administrator without asking you. We've also covered allowing a user to run an application as Administrator with no UAC prompts by creating a scheduled task.
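    For reference, a shortcut target built around this technique looks roughly like the following; the computer name, account name, and program path are placeholders, not taken from the article:

        runas /user:MYPC\AdminAccount /savecred "C:\Program Files\SomeGame\Game.exe"

    The first run prompts once for AdminAccount's password; /savecred stores it in Windows Credential Manager, so later runs of the shortcut start the program under that account without prompting.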

    Read the article

  • How long for data highlighter mark up to appear in structured data tool?

    - by Max
    I used the data highlighter in Webmaster Tools over 3 weeks ago to mark up some local business data, but there is still no structured data being detected in Webmaster Tools. Does anybody have any experience with approximately how long it takes for Google Webmaster Tools to start reporting structured data that has been marked up with their data highlighter? I'm asking specifically about reporting on it in the Webmaster Tools Structured Data section, as opposed to actually appearing in the SERPs.

    Read the article

  • EBS Seed Data Comparison Reports Now Available

    - by Steven Chan (Oracle Development)
    Earlier this year we released a reporting tool that reports on the differences in E-Business Suite database objects between one release and another. That's a very useful reference, but EBS defaults are delivered as seed data within the database objects themselves. What about the differences in this seed data between one release and another? I'm pleased to announce the availability of a new tool that provides comparison reports of E-Business Suite seed data between EBS 11.5.10.2, 12.0.4, 12.0.6, 12.1.1, and 12.1.3. This new tool complements the information in the data model comparison tool. You can download the new seed data comparison tool here: EBS ATG Seed Data Comparison Report (Note 1327399.1)
    The EBS ATG Seed Data Comparison Report reports on the changes between different EBS releases based upon the seed data changes delivered by the product data loader files (.ldt extension), driven by the EBS ATG loader control (.lct extension) files. You can use this new tool to report on the differences in the following types of seed data:
    - Concurrent Program definitions
    - Descriptive Flexfield entity definitions
    - Application Object Library profile option definitions
    - Application Object Library (AOL) key flexfield, function, lookups, and value set definitions
    - Application Object Library (AOL) menu and responsibility definitions
    - Application Object Library messages
    - Application Object Library request set definitions
    - Application Object Library printer styles definitions
    - Report Manager / WebADI component and integrator entity definitions
    - Business Intelligence Publisher (BI Publisher) entity definitions
    - BIS Request Set Generator entity definitions
    - ... and more
    Your feedback is welcome
    This new tool was produced by our hard-working EBS Release Management team, and they're actively seeking your feedback. Please feel free to share your experiences with it by posting a comment here. You can also request enhancements to this tool via the distribution list address included in Note 1327399.1.
    Related Articles
    - Oracle E-Business Suite Release 12.1.3 Now Available
    - New Whitepaper: Upgrading EBS 11i Forms + OA Framework Personalizations to EBS 12
    - EBS 12.0 Minimum Requirements for Extended Support Finalized
    - Five Key Resources for Upgrading to E-Business Suite Release 12
    - E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available
    - New Whitepaper: Planning Your E-Business Suite Upgrade from Release 11i to 12.1

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data

    Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced.

    Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM) and some other inherent features of PowerShell might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:

        Set the WinRM service type to auto start.
        Start the WinRM service.

        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the "Remotely enable PSRemoting" post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux
    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map
    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is a simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

        $aMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");

            # The hashtable to return
            $list = @{};

            # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
            # ... The $dataset has a single 'Data' property which contains an array of data rows
            #     which is a subset of the originally submitted data set.

            # Return the hashtable (Key, PSObject)
            Write-Output $list;
        }

    Reduce
    Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

            # The hashtable to return
            $redux = @{};

            # Return
            Write-Output $redux;
        }

    All Together Now
    When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem, and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function
        $MyMap =
        {
            Param
            (
                [PsObject] $dataset
            )

            Write-Host ($env:computername + "::Map");
            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());
                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1;
                    $list.Add($row.MyKey, $s);
                }
            }
            Write-Output $list;
        }

        $MyReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )

            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
            $redux = @{};
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }

            # Reduce
            $redux.Add($s.MyKey, $sum / $count);

            # Return
            Write-Output $redux;
        }

        # Create the item data
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the Map Reduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion
    I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
Follow me on Twitter to stay up to date on the continuing progress of my Powershell MapRedux module, and thanks for reading! Daniel

    Read the article

  • Making file in user's homedir accessible from web/webserver

    - by evident
    Hi everybody,
    I have a txt file in one of my users' home directories which is regularly updated there by a script. I now want to be able to access (read) this file from the web:
        /home/user/folder/file.txt
    So what I tried is to log in as root, go into my webserver's httpdocs folder /var/www/path/to/domain/httpdocs, and there create a symbolic link with
        ln -s /home/user/folder/file.txt /var/www/path/to/domain/httpdocs/file.txt
    But this didn't work... I already tried changing the chmod of the symlink (which of course changes the permissions of the original file) and also a chown to the webserver user, but no matter what I tried I cannot open the file from the web or from a PHP script (which is what I want to do). Can anybody help me and tell me what I need to do? What rights do I need to give? Or is there another way of achieving this?
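    The usual culprit in a setup like this is not the symlink itself but directory permissions: the web server user needs execute (traverse) permission on every directory leading to the target and read permission on the file, and Apache must be allowed to follow symlinks. A rough sketch, assuming the paths from the question and that the web server runs as an unprivileged "other" user on those directories; adjust names for your distribution:

        # let the web server traverse the path and read the file
        chmod o+x /home/user /home/user/folder
        chmod o+r /home/user/folder/file.txt

        # symlinks must also be allowed in the vhost or .htaccess for the docroot, e.g.:
        #   Options +FollowSymLinks

        # alternative that avoids opening up the home directory: copy the file on a schedule
        #   */5 * * * * cp /home/user/folder/file.txt /var/www/path/to/domain/httpdocs/file.txt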

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 2)

    - by Hugo Kornelis
    In a previous blog post , I demonstrated just how much you can hurt your performance by encapsulating expressions and computations in a user-defined function (UDF). I focused on scalar functions that didn’t include any data access. In this post, I will complete the discussion on scalar UDFs by covering the effect of data access in a scalar UDF. Note that, like the previous post, this all applies to T-SQL user-defined functions only. SQL Server also supports CLR user-defined functions (written in...(read more)

    Read the article

  • PASS Business Intelligence Virtual Chapter Upcoming Sessions (November 2013)

    - by Sergio Govoni
    Let me point out the upcoming live events, dedicated to Business Intelligence with SQL Server, that the PASS Business Intelligence Virtual Chapter has scheduled for November 2013.

    The "Accidental Business Intelligence Project Manager"
    Date: Thursday 7th November - 8:00 PM GMT / 3:00 PM EST / Noon PST
    Speaker: Jen Stirrup
    URL: https://attendee.gotowebinar.com/register/5018337449405969666
    You've watched The Apprentice with Donald Trump and Lord Alan Sugar. You know that the Project Manager is usually the one who gets fired. You've heard that Business Intelligence projects are prone to failure. You know that a quick Bing search for "why do Business Intelligence projects fail?" produces a search result of 25 million hits! Despite all this… you're now a Business Intelligence Project Manager – now what do you do? In this session, Jen will provide a "sparks from the anvil" series of steps and working practices in Business Intelligence project management. What about waterfall vs agile? What is a Gantt chart anyway? Is Microsoft Project your friend or a problematic aspect of being a BI PM? Jen will give you some ideas and insights that will help you set your BI project right: assess priorities, avoid conflict, empower the BI team, and generally deliver the Business Intelligence project successfully!

    Dimensional Modelling Design Patterns: Beyond Basics
    Date: Tuesday 12th November - Noon AEDT / 1:00 AM GMT / Monday 11th November 5:00 PM PST
    Speaker: Jason Horner, Josh Fennessy and friends
    URL: https://attendee.gotowebinar.com/register/852881628115426561
    This session will provide a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also cover some approaches to creating rich hierarchies that make reporting a snap. This session promises to be very interactive and engaging, so bring your toughest dimensional modeling quandaries.

    Data Vault Data Warehouse Architecture
    Date: Tuesday 19th November - 4:00 PM PST / 7:00 PM EST / Wednesday 20th November 11:00 PM AEDT
    Speaker: Jeff Renz and Leslie Weed
    URL: https://attendee.gotowebinar.com/register/1571569707028142849
    Data vault is a compelling architecture for an enterprise data warehouse using SQL Server 2012. A well-designed data vault data warehouse facilitates fast, efficient, and maintainable data integration across business systems. In this session Leslie and I will review the basics of enterprise data warehouse design, introduce you to the data vault architecture, and discuss how you can leverage new features of SQL Server 2012 to help make your data warehouse solution provide maximum value to your users.

    Read the article
