Search Results

Search found 17194 results on 688 pages for 'document databases'.

Page 424 of 688

  • Procurement: Troubleshooting Approval Hierarchy Issues

    - by Annemarie Provisero
    ADVISOR WEBCAST: Procurement: Troubleshooting Approval Hierarchy Issues. PRODUCT FAMILY: EBS - Procurement. November 29, 2011 at 7 am MST, 9 am EST, 2 pm London, 4 pm Cairo. This one-hour session is recommended for technical and functional users who would like to know how Purchasing builds the approval list for a document. It also includes a troubleshooting section for cases where the list does not include the correct approvers or when workflow fails to build the approval list (no approver found). TOPICS WILL INCLUDE: an overview of the Oracle Purchasing approval hierarchy, the approval methods, the approval list, and how to troubleshoot and diagnose related issues. A short, live demonstration (only if applicable) and a question-and-answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • MSDN / TechNet Key Importer for KeePass 2

    - by Stacy Vicknair
    If you have an MSDN account then, like me, you probably claim keys systematically and just as systematically forget which keys you've used in which test environments! In a meager attempt to help myself track my keys, I created an importer for KeePass 2 that takes in the XML document you can export from MSDN and TechNet. The source is available at https://github.com/svickn/MicrosoftKeyImporterPlugin.   How do I get my KeysExport.xml from MSDN or TechNet? Easy! First, in MSDN, go to your product keys. From there, at the top right, select Export to XML. This will let you download an XML file full of your Microsoft keys.   How do I import it into KeePass 2? The instructions are simple and available in the GitHub ReadMe.md, so I won't repeat them. Here is a screenshot of what the imported result looks like:   As you can see, the import process creates a group called Microsoft Product Keys and creates a subgroup for each product. The individual entries each represent an individual key, stored in the password field. The importer decides whether a key is new based on the key stored in the password, so you can edit the notes or title for the individual entries however you please without worrying about them being overwritten or duplicated if you re-import an updated KeysExport.xml from MSDN! This lets you keep track of where those pesky keys are in use and have the keys available anywhere you can access your KeePass database!   Technorati Tags: KeePass, KeePass 2, MSDN, TechNet
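
    For the curious, the heart of such an importer is not much code. Below is a rough, hypothetical Python sketch of the same idea (walk the exported XML, group keys by product, skip keys already seen). The Product and Key element names are invented for illustration and are not the real MSDN schema; the actual plugin is C# and lives in the GitHub repository above.

        import xml.etree.ElementTree as ET

        # NOTE: "Product"/"Key" names below are placeholders, not the real
        # KeysExport.xml schema - inspect your own export before relying on this.
        tree = ET.parse("KeysExport.xml")
        keys_by_product = {}
        for product in tree.getroot().iter("Product"):
            name = product.get("Name", "Unknown product")
            for key in product.iter("Key"):
                if key.text:
                    keys_by_product.setdefault(name, set()).add(key.text.strip())

        # One "group" per product, one entry per key - mirroring the KeePass layout.
        for product_name, keys in sorted(keys_by_product.items()):
            print(product_name)
            for k in sorted(keys):
                print("    " + k)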

    Read the article

  • Multi-Page PDF Banner/Poster from PDF

    - by Tim Lytle
    I'm looking for a utility that will take one large-sized PDF and split it into smaller PDFs for banner/poster printing. Looking for a Linux or multi-platform solution. More background: my goal is to take an Inkscape document and generate a PDF, then print it on a printer that doesn't do banner/poster printing automatically - so if there's a better solution, I'd be happy to hear that as well. I've found that exporting as a PNG both takes a while and sometimes doesn't preserve blends. Printing as PDF (Ubuntu print-to-file) seems to work well. I've found utilities that can take large image formats and generate multipage PDFs, but not PDF to PDF.
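
    If nothing off-the-shelf turns up, the tiling step itself can be scripted. Below is a rough, untested sketch using the Python pypdf library; the 2x2 grid, the file names, and the exact mediabox property spelling are assumptions (older PyPDF2 releases call the same things mediaBox, lowerLeft and upperRight), so treat it as a starting point rather than a finished tool.

        import copy
        from pypdf import PdfReader, PdfWriter
        from pypdf.generic import RectangleObject

        reader = PdfReader("poster.pdf")        # the single large page exported from Inkscape
        src = reader.pages[0]
        w = float(src.mediabox.width)
        h = float(src.mediabox.height)
        cols, rows = 2, 2                       # assumed tile grid; adjust to the target paper size

        writer = PdfWriter()
        for r in range(rows):                   # PDF origin is bottom-left, so row 0 is the bottom strip
            for c in range(cols):
                tile = copy.copy(src)           # shallow copy of the page, then give it its own box
                tile.mediabox = RectangleObject((c * w / cols, r * h / rows,
                                                 (c + 1) * w / cols, (r + 1) * h / rows))
                writer.add_page(tile)

        with open("poster-tiles.pdf", "wb") as f:   # one multi-page PDF, ready to print and tape together
            writer.write(f)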

    Read the article

  • After upgrading to 13.10, biblatex and biber are not compiling my references

    - by Lewelma
    I am working on a thesis using LaTeX, with my references relying on biblatex-apa. Ubuntu 13.04 provided all my LaTeX needs, but after upgrading to 13.10 the biblatex / biber combo will no longer compile my APA-style references. No other changes have been made to my documents or references, and the rest of the document appears fine (albeit with broken references and no bibliography). I found a reference to a possible cause, which is that biblatex 1.7-1 is incompatible with texlive 2013 (as available through the 13.10 repositories), and that the issue may be fixed by biblatex 2.7a-1, which has been committed upstream in Debian. See: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=718244 However, that doesn't help me much, as I need to compile my references quite soon. How can I get my references to compile in the meantime? Is there a patched biblatex or biber that I can manually slot into place? Is the upstream fix on its way? Or do I need to go to TeX Live and do a replacement install directly (which is not my preference)? Thanks!

    Read the article

  • How to represent an agile project to people focused on waterfall [closed]

    - by ahsteele
    Our team has been asked to represent our development efforts in a project plan. No one is unhappy with our work or questioning our ability to deliver; we are just participating in an IT cattle call for project plans. Trouble is, we are an agile team and haven't thought about our work in terms of a formal project plan. While we have a general idea of what we are working on next, we aren't 100% sure until we plan an iteration. Until now our team has largely operated in a vacuum and has not been required to present our methodology or metrics to outside parties. We follow most of the practices espoused in Extreme Programming. We hold quarterly planning meetings to get a general idea of the stories we are going to work on for a quarter. That said, our stories are documented on 3x5 cards and are only estimated at the beginning of the iteration in which they are going to be worked. After estimation we document the story in Team Foundation Server. During an iteration, we attach code to stories and mark stories as completed once finished. From this data we are able to generate burn down and velocity charts. Most importantly, we know our average velocity for an iteration, keeping us from biting off more than we can chew. I am not looking to modify the way we do development but want to present our development activities in a report that someone only familiar with waterfall will understand. In What Does an Agile Project Plan Look Like, Kent McDonald does a good job laying out the differences between agile and waterfall project plans. He specifies the differences in consumable bullets: an agile project plan is feature based; it is organized into iterations; it has different levels of detail depending on the time frame; and it is owned by the team. Being able to explain the differences is great, but how best to present the data?

    Read the article

  • What should happen at the start of a software project startup?

    - by Willem
    A quick introduction: my college semesters include an 8-week project working for an actual company with a software need, in order to get some much-needed practical experience. I have just started such a project with 5 other students. We're required to spend roughly 40 hours a week per student on this project. We're working with Scrum as the software development method; this was assigned by our teachers. The question: day one of the project just ended, which has created some questions for me as to how to start a project in the 'real world'. Our first day included working on a project planning document (not sure what the English term is), creating an appointment with the company for an introduction and the opportunity to start specifying the requirements, and setting up some standards for behavior within the group. However, these items didn't take that long to finish. We've made some concrete plans for tomorrow, and the day after that we'll meet the company. This still leaves several hours of 'work-time' unspent. Is it usual not to be able to fill every hour of the day with work at the start of a project, or are we simply too inexperienced to see what work needs to be done at this stage of a project, or are we, perhaps, going through the above list too fast? How does this work in the 'real world'? Do you spend your time wondering 'what should I do now', or do you have a clear view of what you're supposed to do at that moment?

    Read the article

  • How to properly use Windows XP Mode

    - by user23950
    I just want to know how I can access, from Windows 7, the applications installed in XP Mode. I just installed the application in the default location in XP Mode, which is C:\Program Files. Do I have to install it on the drive where Windows 7 is installed so that I can access it quickly? Because I still have to wait about 2-4 minutes just to open up a Word document (MS Office is installed in XP Mode) that is saved on the desktop of my physical machine. Please help. Details: 2 GB of RAM, Pentium Dual Core processor, 250 GB of HDD.

    Read the article

  • How to monetize and protect an engine's and its framework's copyrights and patents?

    - by Arthur Wulf White
    I created a game engine that handles: rendering levels with 2D textured curved surfaces; collisions with curved surfaces; and animation paths on, and navigation in, 2D space. I have also made a framework for: procedural organic level generation with round surfaces; level editing; and lightweight sprite design. The engine and framework are written in AS3 and I am in the process of translating the code into HaXe to better support other platforms. I am also interested in adding animated curved platforms and more advanced level editing features. Currently, I have a part-time job, and any time I spend on this engine is either taken out of my limited free time (I'm a student working to support myself through school) or out of my time working at my job. I really believe this engine can make life much easier for people designing tower defence games, shooters and platformers, while also possibly improving their results. It could also support RTS, RPGs and racing games very well. It contains original algorithms that could be used for procedural generation of organic, round and smooth levels. The algorithms I used are new and are not available in any other level editor I've seen. In order to constantly improve the engine and have it tested thoroughly, I think the best route is releasing it to the public. What are the best ways to benefit myself and others with my new framework? I want to have some license allowing me to share the framework and still benefit from it. Any advice would be appreciated. This issue has been on my mind a lot this year and I am hoping to find a solution that will bring me some relief. I am thinking of designing three sample games, releasing them, and starting a Kickstarter; any advice and thoughts on the matter would be valuable. My goal is, like Markus von Broady suggested, to get people involved in developing the engine and let people use it for games, for either a symbolic fee or for free, and charge for support. That, or use some form of crowdsourcing. Do I need to hire a lawyer to get some sort of legal document to protect my work?

    Read the article

  • How to SEO Optimize Javascript Image Loader?

    - by skibulk
    I am building an image-centric catalog website. It catalogs collectible gaming cards numbering 100,000+ pages. Competitor sites receive millions of hits each month, so with the possibility of excessive traffic, I need to moderate image bandwidth while also optimizing for image SEO. I'm looking for some tips on doing so. Each page on the site features one card with appropriate tags and descriptions. There are, however, four images for each card: one on matte cardstock, one on foil cardstock, one digital, and one digital foil. In a world with unlimited bandwidth and no-wait page loads, I'd simply embed all four images on the main product page with titles, alt tags, and captions to rank them according to their version keyword. In reality, a JavaScript gallery image loader seems appropriate. Here is a simplified example of my current code. Would this affect SEO in any way? Should I be doing anything differently? Note that I don't want to create a page for each image, as I'd have to duplicate the card tags and descriptions on each one, diluting PR for the main page. Thanks for any insight!

        <script type="text/javascript">
        document.write('<img src="thumbnail1.jpg" data-src="version1.jpg"> <img src="thumbnail2.jpg" data-src="version2.jpg"> <img src="thumbnail3.jpg" data-src="version3.jpg"> <img src="thumbnail4.jpg" data-src="version4.jpg">');
        </script>
        <noscript>
        <img src="version1.jpg">
        <img src="version2.jpg">
        <img src="version3.jpg">
        <img src="version4.jpg">
        </noscript>

    Read the article

  • Direct IO enhancements in OVM Server for SPARC 2.2(a.k.a LDoms2.2)

    - by user12611315
    The Direct I/O feature has been available to LDoms customers since LDoms 2.0. Apart from the latest SR-IOV feature in LDoms 2.2, a few enhancements to the Direct I/O feature are worth noting. These are:

    Support for Metis-Q and Metis-E cards. Support for these cards has been highly requested, and they are worth mentioning because they are the only combo cards containing both Fibre Channel and Ethernet in the same card. With this support, a customer can have both SAN storage and network access with just one card and one PCIe slot assigned to a logical domain. This reduces cost and helps when a given platform has fewer slots. The part numbers for these cards are listed below. I have tried to include the platforms on which each card is supported, but this information can quickly become outdated; the accurate information can be found in the Support Document.

        Metis-Q: StorageTek Dual 8Gb Fibre Channel Dual GbE ExpressModule HBA, QLogic; part number SG-XPCIEFCGBE-Q8-N; supported platforms: SPARC T3-4, T4-4
        Metis-E: StorageTek Dual 8Gb Fibre Channel Dual GbE ExpressModule HBA, Emulex; part number SG-XPCIEFCGBE-E8-N; supported platforms: SPARC T3-4, T4-4

    Additional cards added to the portfolio of supported cards. These are mainly Powerville-based Ethernet cards; the part numbers are:

        7100477  Sun Quad Port GbE PCI Express 2.0 Low Profile Adapter, UTP
        7100481  Sun Dual Port GbE PCI Express 2.0 Low Profile Adapter, MMF
        7100483  Sun Quad Port GbE PCI Express 2.0 ExpressModule, UTP
        7110486  Sun Quad Port GbE PCI Express 2.0 ExpressModule, MMF

    Note: the Direct I/O feature has a hard dependency on the root domain (the PCIe bus owner, here the primary domain). That is, rebooting the root domain for any reason may impact the logical domains that have PCIe slots assigned via Direct I/O, so rebooting a root domain needs to be carefully managed. Also apply the failure-policy settings as described in the admin guide and release notes to deal with unexpected cases.

    Read the article

  • Webcast Q&A: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter

    - by Kellsey Ruppel
    Last week we had the fourth webcast in our WebCenter in Action webcast series, "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter", where customer Joe Lichtefeld from ResCare and Wayne Boerger & Doug Thompson from Oracle Partner TEAM Informatics shared how Oracle WebCenter is allowing ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of the intranet as a primary business communication tool. In case you missed it, here's a recap of the Q&A.
    Joe Lichtefeld, ResCare
    Q: Did you run into any issues in the deployment of the platform?
    A: We experienced very few issues when implementing the content management and search functionalities. There were some challenges in determining the metadata structure. We tried to find a fine balance between having enough fields to provide the functionality needed and limiting the impact on the contributing members.
    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit to date is two-fold. Content on the intranet can be maintained by the individual contributors in a more timely manner than in our old process, where all requests were updated by IT. The other big benefit is the ability to find the most current version of a document instead of relying on emails and phone calls to track down the "current" version.
    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: We experienced very little resistance. Most of our community groups were eager to be able to contribute and maintain their information. We had the normal hurdles of training and follow-up training that come with implementing a new system and process. As our second phase rolled out access to all employees, we have received more positive feedback on the accessibility of information.
    Wayne Boerger & Doug Thompson, TEAM Informatics
    Q: Can you integrate multiple repositories with the Google Search Appliance?
    A: Yes, the Google Search Appliance is designed to index lots of different repositories, from both public and internal sources. There are included connectors to many repositories, such as SharePoint, databases, file systems, LDAP, and, with the TEAM GSA Connector, the Oracle Content Server. And the index for these repositories can be configured into different collections depending on the use cases that each customer has and, really, on each need within a customer environment.
    Q: How many different filters can you add when the search results are returned?
    A: Presuming this question is about the filtering on the search results: you can add as many filters as you like, and it can be done by collection or by any number of other criteria. Most importantly, customers now have the ability to limit the returned content by a set metadata value.
    Q: With the TEAM Sites Connector, what types of content can you sync?
    A: There's really no limit; if it can be checked into the content server, then it is eligible for sync into Sites. So basically, any digital file that has relevance to a Sites implementation can be checked into the WebCenter Content central repository, and then the connector can/will manage it.
    Q: Using the Connector, are there any limitations around where in Sites that synced content can be used?
    A: There are no limitations on where it can be used. When setting up your environment to use it, you just need to think through the different destinations on the Sites side that might use the content; that way you've got the right information to create the rules needed for the connector.
    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!  ResCare Solves Content Lifecycle Challenges with Oracle WebCenter from Oracle WebCenter

    Read the article

  • How do I create a Launcher in Ubuntu 9.10 that runs a shell script?

    - by mkelley33
    Here's my situation: I'm new to Ubuntu (just installed 9.10 Karmic Koala, 64-bit). My purpose is to easily run PyCharm without too much typing (i.e. cd ... then ./pycharm.sh), so I want to create a desktop Launcher instead of opening a terminal and typing (without resorting to the "Run in Terminal" option). I tried to create a Launcher that executes the .sh script in my Documents directory: I right-clicked the Desktop, chose Create Launcher, and tried (a) Type == Application, Browse [insert absolute path to .sh script]; no luck, and (b) Type == Application in Terminal, Browse ... ditto. I'm open to any other alternatives that involve as little typing as possible. I would like to just start Ubuntu, click Launcher icons, and have terminals spring to life, running the intended scripts. Crazy? No. Lazy? Probably. Productive? Hopefully :)
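
    For context, the Launcher that "Create Launcher" writes is just a .desktop file on the Desktop, so one low-typing option is to hand it a bash -c command that does the cd for you. The paths below are placeholders for wherever pycharm.sh actually lives, and the script must be executable (chmod +x pycharm.sh):

        [Desktop Entry]
        Type=Application
        Name=PyCharm
        Comment=cd into the install directory and start PyCharm
        Exec=bash -c "cd /home/yourname/Documents/pycharm/bin && ./pycharm.sh"
        Terminal=true

    The same Exec line can also be pasted into the Command box of the Create Launcher dialog ("Application in Terminal" gives a visible terminal; plain "Application" hides it).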

    Read the article

  • Technical Integration Roadmap for OBI11g and Oracle Hyperion EPM System

    - by Mike.Hallett(at)Oracle-BI&EPM
    There is an excellent technical whitepaper on the integration roadmap for Oracle business intelligence enterprise edition and the Oracle Hyperion enterprise performance management system  (download at this link).  This document lists the integration points among all current releases of Oracle BI EE with EPM System releases: with live links to other relevant documentation also provided. You may also be interested in the overall Hyperion EPM System Documentation Resources which can be found from the Doc Portal. And, there are two new tools for EPM @ MyOracleSupport  {this needs your oracle logon} : Cumulative Feature Overview Tool This new tool offers a simple way to determine the features developed between releases to assist you in your upgrade implementations. The tool helps you to plan your upgrades by providing concise descriptions of new and enhanced solutions and functionality that are added between your current and target releases. With the Cumulative Feature Overview Tool, you can quickly and easily find information about new features for each EPM System product. Defects Fixed Finder Tool This new tool provides an efficient way to review the defects fixed in patch set updates, patch set exceptions, and patch sets for major releases, starting with Release 11.1.1. The tool helps you plan patch implementations by providing concise descriptions of defects fixed after your current release. The Defects Fixed Finder enables you to easily find information about defects fixed for each EPM System product.

    Read the article

  • How to open a second instance of the pdf x-change viewer?

    - by rumtscho
    Every time I open a new document in the pdf x-change viewer, it gets opened in the same window. I want to place different documents on different places on the screen (independently of each other, not just tiling the window for side-by-side view), but cannot do it. Even going to the start menu and starting the reader again doesn't open a new instance of the application. Can I force the program to open a new instance or a new independent window? If this is impossible, which other free reader does what I need and also lets me make changes to the file? I don't need to edit the text itself, but I want to be able to add comments, underline and highlight text, and add some graphic elements (e.g. a circle or a freehand line).

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports for the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout. It is rare that I see a report in a simple tabular or matrix format, and this report continued that trend. I found that the processes for developing complex SSRS reports aren't as commonly described as I would have thought. Below I will lay out the process that I went through to create a solution. I started with a List control which will contain the layout of the master (parent) information. This allows for a main repeating report part. The dataset for this report should include the data elements that need to be passed to the subreport as parameters. As you can see, the layout is simply text boxes that are bound to the dataset. The next step is to set a row group on the List row. When the dialog appears, select the field that you wish to group your report by. A good example in this case would be the employee name or ID. Create a second report which becomes the subreport. The example below has a matrix control. Create the report as you would any parameter-driven document by parameterizing the dataset. Add the subreport to the main report inside the row of the List control. This can be accomplished either by dragging the report from the solution explorer or by inserting a Subreport control and then setting the report name property. The last step is to set the parameters on the subreport. In this case the subreport has EmpId and ReportYear as parameters. Some of the documentation states that the dialog will automatically detect the child parameters, but this has not been my experience. You must make sure that the names match exactly. Tie the name of the parameter to either a field in the dataset or a parameter of the parent report; a hand-edited example is sketched below. del.icio.us Tags: SQL Server Reporting Services, SSRS, SQL Server, Subreports
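
    For reference, when the dialog refuses to cooperate you can also wire the subreport parameters by hand in the RDL. A rough sketch of what the element ends up looking like is below; the report and field names are placeholders, and it is the =Fields!...Value and =Parameters!...Value expressions that do the work:

        <Subreport Name="EmployeeDetail">
          <ReportName>EmployeeDetailSubreport</ReportName>
          <Parameters>
            <Parameter Name="EmpId">
              <Value>=Fields!EmpId.Value</Value>
            </Parameter>
            <Parameter Name="ReportYear">
              <Value>=Parameters!ReportYear.Value</Value>
            </Parameter>
          </Parameters>
        </Subreport>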

    Read the article

  • How to install and configure an MTA on Linux [closed]

    - by Umair Mustafa
    I need to know which MTA is better and simpler to handle and configure on Linux, as I need to run a script that will send me the output of a command whenever it runs via cron. OK, the case is this: every day I have to manually check the disk space on more than 30 servers, which is a headache, and I have to document it. So I will simply run df -h, and the output of this command should be sent to my email. So, now that you have the story, tell me which MTA is better (sendmail or postfix) and give me some instructions on how to install and configure it. And after configuring it, how do I add df -h so that it starts sending me the output by email? Thanks in advance.
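
    For illustration, once whichever MTA you pick is accepting mail on localhost (Postfix is generally considered the easier of the two to configure), the reporting side can be as small as a cron entry that pipes df -h to the mail command, or a script along the lines of the hypothetical Python sketch below; the schedule, sender and recipient are placeholders.

        #!/usr/bin/env python
        # df_report.py - run daily from cron, e.g.:  0 8 * * * python /opt/scripts/df_report.py
        import subprocess
        import smtplib
        from email.mime.text import MIMEText

        # Capture the same output you would otherwise read by hand on the box
        report = subprocess.check_output(["df", "-h"]).decode()

        msg = MIMEText(report)
        msg["Subject"] = "Daily disk space report"   # placeholder subject
        msg["From"] = "root@localhost"               # placeholder sender
        msg["To"] = "you@example.com"                # placeholder recipient

        # Hand the message to the local MTA (postfix/sendmail) listening on localhost
        s = smtplib.SMTP("localhost")
        s.sendmail(msg["From"], [msg["To"]], msg.as_string())
        s.quit()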

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
    I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility: assist with disaster recovery preparation, and identify configuration changes. I've released the application as open source through CodePlex. You can download it from CodePlex at the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify.

    Disaster Planning. ScriptSqlConfig generates scripts for logins, jobs and linked servers. It writes the properties and configuration of the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster; the properties and configuration will need to be compared manually. Each job is scripted to its own file. Each linked server is scripted to its own file. The linked servers don't include the password if you use a SQL Server account to connect to the linked server, so you'll need to store those somewhere secure. All the logins are scripted to a single file. This file includes Windows logins, SQL Server logins and any server role membership. The SQL Server logins are scripted with the correct SID and hashed passwords. This means that when you create the login it will automatically match up to the users in the database and have the correct password. This is the only script that I generate programmatically rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be reviewed manually in the event of a disaster, or you could DIFF them with the configuration on the new server.

    Configuration Changes. These scripts and files are all designed to be checked into a version control system. The scripts themselves don't include any date-specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory. The process will delete any existing files before writing new ones; this solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up. To see any changes I just need to query the version control system to show me any changes to the files.

    Database Scripting. Utilities that script database objects are plentiful. CodePlex has at least a dozen of them, including one I wrote years ago. The code is so easy to write it's hard not to include that functionality. This functionality wasn't high on my list because it's included in a database backup. Unless you specify the /nodb option, the utility will script out many user database objects. It will script one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types and user-defined functions. I know there are more I need to add but haven't gotten around to it yet. If there's something you need, please log an issue and get it added. Since it scripts one object per file these really aren't appropriate to recreate an empty database. They are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control, but a little extra insurance never hurts.

    Conclusion. I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren't contained in user databases. This should help with configuration changes and especially disaster recovery.
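
    To give a feel for the output, a scripted SQL Server login comes out looking roughly like the statement below (the login name, password hash and SID are invented placeholders). Because the SID and the hashed password travel with the script, recreating the login on a DR server re-links the existing database users and keeps the original password.

        CREATE LOGIN [app_login]
            WITH PASSWORD = 0x0200A1B2C3D4E5F60718293A4B5C6D7E8F901234 HASHED,
                 SID = 0x8B5D3C1A9F2E4D6C8B7A695847362514,
                 DEFAULT_DATABASE = [master],
                 CHECK_POLICY = OFF;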

    Read the article

  • Microsoft Office "Read-only" warning not appearing on Samba shares on Mac OS X Server

    - by bongo
    Hi, some of my users don't get the "read-only" warning (but "read-only" does appear on the title bar and the document is indeed opened read-only) when opening Office 2007 documents already opened by another user. We run the samba share off an XSan volume under Mac OS X 10.5.8 Server. Strict locking is on but oplocks are off (from Server Admin). At home, with a simple samba share in 10.6.3 server, it works correctly. Any ideas? or is this a 10.5.8 behavior?

    Read the article

  • KISS principle applied to programming language design?

    - by Giorgio
    KISS ("keep it simple, stupid"; see e.g. here) is an important principle in software development, even though it apparently originated in engineering. Quoting from the Wikipedia article: "The principle is best exemplified by the story of Johnson handing a team of design engineers a handful of tools, with the challenge that the jet aircraft they were designing must be repairable by an average mechanic in the field under combat conditions with only these tools. Hence, the 'stupid' refers to the relationship between the way things break and the sophistication available to fix them." If I wanted to apply this to the field of software development I would replace "jet aircraft" with "piece of software", "average mechanic" with "average developer" and "under combat conditions" with "under the expected software development / maintenance conditions" (deadlines, time constraints, meetings / interruptions, available tools, and so on). So it is a commonly accepted idea that one should try to keep a piece of software simple stupid so that it is easy to work on later. But can the KISS principle be applied also to programming language design? Do you know of any programming languages that have been designed specifically with this principle in mind, i.e. to "allow an average programmer under average working conditions to write and maintain as much code as possible with the least cognitive effort"? If you cite any specific language it would be great if you could add a link to some document in which this intent is clearly expressed by the language designers. In any case, I would be interested to learn about the designers' (documented) intentions rather than your personal opinion about a particular programming language.

    Read the article

  • Drupal + Lighttpd: enabling clean urls (rewriting)

    - by Patrick
    I'm emulating Ubuntu on my Mac, and I use it as a server. I've installed lighttpd + Drupal, and the following configuration section requires a domain name in order for clean URLs to work. Since I'm using a local server I don't have a domain name, and I was wondering how to make it work given that the IP of the local machine changes frequently. Thanks.

        $HTTP["host"] =~ "(^|\.)mywebsite\.com" {
            server.document-root = "/var/www/sites/mywebsite"
            server.errorlog = "/var/log/lighttpd/mywebsite/error.log"
            server.name = "mywebsite.com"
            accesslog.filename = "/var/log/lighttpd/mywebsite/access.log"
            include_shell "./drupal-lua-conf.sh mywebsite.com"
            url.access-deny += ( "~", ".inc", ".engine", ".install", ".info", ".module", ".sh", "sql", ".theme", ".tpl.php", ".xtmpl", "Entries", "Repository", "Root" )
            # "Fix" for Drupal SA-2006-006, requires lighttpd 1.4.13 or above
            # Only serve .php files of the drupal base directory
            $HTTP["url"] =~ "^/.*/.*\.php$" {
                fastcgi.server = ()
                url.access-deny = ("")
            }
            magnet.attract-physical-path-to = ("/etc/lighttpd/drupal-lua-scripts/p-.lua")
        }
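
    One workaround I'm considering, assuming this box only ever serves this one site, is to stop keying the block on a fixed domain and match any Host header instead, so the vhost keeps working whatever address DHCP hands out:

        # match any Host header (browse by IP, by hostname, or by anything else)
        $HTTP["host"] =~ ".*" {
            server.document-root = "/var/www/sites/mywebsite"
            # ... keep the rest of the original block unchanged ...
        }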

    Read the article

  • BI-Applications Special Price Promotion for Partners

    - by Mike.Hallett(at)Oracle-BI&EPM
    Partners should keep in mind the "Midsize Market" pricing promotion for BI-Applications solution packages, with reduced minimums applicable to Oracle's Business Intelligence products and a pre-approved 50% discount. Partners additionally get their normal e-business reseller discount. This now makes it very attractive to offer the pre-built BI-Applications, such as Manufacturing Analytics, Financial Analytics, Procurement and Spend Analytics, Project Analytics, and Human Resources Analytics, both to customers newly implementing Oracle ERP and to the many existing Oracle ERP (E-Business Suite, PeopleSoft and JDE) customers. To answer any questions, to get the partner document with further details of this offer, or to work with us on our local sales campaigns targeting existing ERP customers, please send your query to [email protected] or [email protected], or discuss it with your local Oracle Sales or Channel representative for Applications to Midsize Enterprises. This promotion is ONLY for end customers whose organisations have an annual revenue (or public sector budget) below $500 million and who are based in Europe, the Middle East or Africa. For more information see the original article, "New FY13 BI-Applications Price Promotion for MIDSIZE CUSTOMERS", and send your query to [email protected].

    Read the article

  • jQuery fading in an element - not working exactly as I want it to...

    - by Nike
    Anybody see what's wrong? It doesn't seem to do anything. If I replace $(this, '.inner').stop(true, false).fadeIn(250); with $('.fadeInOnHover .inner').stop(true, false).fadeIn(250); then all the .inner elements on the page will fade in (which isn't really what I want, as I have ~10 of them). I know it's possible to achieve what I want to do, but I don't know how in this case. Thanks in advance :)

        <script type="text/javascript">
        $(document).ready(function() {
            $('.fadeInOnHover .inner').css("display","none");
            $('.fadeInOnHover').hover(function() {
                $(this, '.inner').stop(true, false).fadeIn(250);
            }).mouseout(function() {
                $(this, '.inner').stop(true, true).fadeOut(100);
            });
        });
        </script>

    Read the article

  • disable 250 character URL limit in Internet Explorer

    - by Keltari
    Users of a SharePoint document library are getting this error: "The URL for this file is too long for the application. A temporary copy of this file will be opened on your computer. You must save this copy as a new file." After doing some research, it appears Internet Explorer has a limit of roughly 250 characters for a URL. Some URLs provided by SharePoint far exceed this limit, one example being 790 characters long. Is there a way to disable this limit? I have looked, but there doesn't appear to be a solution other than shortening the folder/path names.

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there were several announcements during the keynote. Moreover, other news was not highlighted in the keynote but is equally important, if not more so, for the BI community. Let's start with the big news in the keynote (other details on the SQL Server Blog):
    Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I'm curious to see whether it can also be used to improve ETL performance, and how it differs from using SSD technology.
    Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered columnstore index. This is really great news for near real-time reporting needs!
    Polybase: SQL Server 2012 Parallel Data Warehouse (PDW), which will include the Polybase technology, will debut in 2013. By using Polybase, a single T-SQL query will run across relational data and Hadoop data: a single query language for both. That sounds really interesting for using Big Data in a more integrated way with existing relational databases and, of course, for loading a data warehouse using Big Data, which is the ultimate goal that we BI pros all have, right?
    SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now, and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.
    Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool generating DAX queries on top of a Multidimensional model. There is still no information about availability, but this is *not* included in SQL Server 2012 SP1.
    So what about Mobile BI? Well, even if it was not announced during the keynote, there is a dedicated session on this topic and there is very important news in this area:
    iOS, Android and Microsoft mobile platforms: the commitment is to get data exploration and visualization capabilities working by June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we are all waiting for in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven't seen any demo, so it's not clear what the offline navigation experience will be (or whether there will be one). But at least we know that Microsoft is working on native applications in this area. I'm not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don't have to wait another six months before seeing some demos of native BI applications on mobile platforms!
    Viewing Reporting Services reports on the iPad is supported starting with SQL Server 2012 SP1, which was released today. This is another good reason to install SP1 on SQL Server 2012. If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday, November 8, 2012, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • SubMain Ghost Doc Pro with SpellChecking

    - by TATWORTH
    SubMain have announced at http://community.submain.com/forums/2/1556/ShowThread.aspx#1556 that the next version of GhostDoc will include a VS2005/VS2008/VS2010-compatible spell checker. This replaces their existing spell checker (http://submain.com/products/codespell.aspx), which is being discontinued. If you buy GhostDoc Pro now (I urge you to, as it helps tremendously in documenting both C# and VB.NET code), be sure to include Licence Protection, as it means you will get the next version, which includes the spell checker, for free! Why is a spell checker important? By spell checking all your comments, you will make your documentation much easier to read. This means that instead of being distracted by typographic errors, your mind will be free to see errors in what has been written. Remember, the next person who has to struggle to read your code could well be yourself! So be kind to yourself. Do the following:
    Document whole source files in VB.NET or C# with GhostDoc Pro.
    Run StyleCop and fix the issues it uncovers.
    Run the spell checker (when it is available).
    Add remarks where necessary.
    Specify in the project that XML documentation should be produced.
    Compile the XML into help files using Sandcastle.
    Review the help files and ask yourself whether the explanations are sufficient.

    Read the article
