Search Results

Search found 105727 results on 4230 pages for 'oracle user tips'.


  • SQL SERVER – Get Date and Time From Current DateTime – SQL in Sixty Seconds #025 – Video

    - by pinaldave
    This is the 25th video in the SQL in Sixty Seconds series we started a few months ago. Even though this is the 25th video, it feels like we just started a few days ago. The best part of SQL in Sixty Seconds is that one can learn something new in less than sixty seconds. Many of these concepts are not new to everyone, but we all have 60 seconds to refresh our memories. In this video I have touched on a very simple question which I receive very frequently on this blog. Q1) How to get the current date and time? Q2) How to get only the date from a datetime? Q3) How to get only the time from a datetime? I have created a sixty second video on this subject and hopefully it will help many beginners in the SQL Server field. Here is a similar script to the one I used in the video. SELECT GETDATE() GO -- SQL Server 2000/2005 SELECT CONVERT(VARCHAR(8),GETDATE(),108) AS HourMinuteSecond, CONVERT(VARCHAR(10),GETDATE(),101) AS DateOnly; GO -- SQL Server 2008 Onwards SELECT CONVERT(TIME,GETDATE()) AS HourMinuteSeconds; SELECT CONVERT(DATE,GETDATE()) AS DateOnly; GO Related Tips in SQL in Sixty Seconds: Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()} Get Time in Hour:Minute Format from a Datetime – Get Date Part Only from Datetime Get Current System Date Time Get Date Time in Any Format – UDF – User Defined Functions Date and Time Functions – EOMONTH() – A Quick Introduction DATE and TIME in SQL Server 2008 I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share educational material with you. Image Credit: Movie Gone in 60 Seconds Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
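    One additional approach, not shown in the video but useful on older versions where the DATE type does not exist: the time portion can be stripped arithmetically instead of via string conversion. The sketch below is illustrative only.

        -- Not part of the original script: strip the time portion without
        -- string conversion (works on SQL Server 2000/2005, where DATE is unavailable).
        SELECT DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0) AS DateAtMidnight;
        GO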

    Read the article

  • SQL SERVER – Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video

    - by pinaldave
    SQL Server Management Studio returns results in Grid View, Text View, and to a file. When we copy results from Grid View to Excel, a common complaint is that the column headers displayed in the resultset are not copied to Excel. I often spend time performance tuning databases, and I run many DMVs in SSMS to get a quick view of the server. In my case it is almost certain that I need the column headers every time I copy my data to Excel or any other place. SQL Server Management Studio has two different ways to do this. Method 1: Ad hoc. When the result is rendered, you can right-click on the resultset and click Copy with Headers. This will copy the headers along with the resultset. Additionally, you can use the shortcut key CTRL+SHIFT+C to copy column headers along with the resultset. Method 2: Option setting at the SSMS level. This is an SSMS-level setting, and I keep this option always selected as I often need the column headers when I select the resultset. Go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> check the box “Include column headers when copying or saving the results.” Both of the methods are discussed in the following SQL in Sixty Seconds video. Here is the code used in the video. Related Tips in SQL in Sixty Seconds: Copy Column Headers in Query Analyzers in Result Set Getting Columns Headers without Result Data – SET FMTONLY ON If we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
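    The SET FMTONLY tip referenced above can be sketched as follows. This is an illustrative example only (it is not the code from the video, and sys.objects is used here simply as a convenient table): it returns only the column metadata, which is handy when you want the headers without any data rows.

        -- Illustrative only: return column headers (metadata) without data rows.
        SET FMTONLY ON;
        SELECT * FROM sys.objects;
        SET FMTONLY OFF;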

    Read the article

  • DRM Tallyrand - The New User Interface

    - by russ.bishop
    I received word recently that the Tallyrand (11.1.2.0) build is out of our hands. I'm not sure when it will hit eDelivery, but if it hasn't already it should happen soon. For this post, I want to really quickly show the new user interface. The login screen: When you login, you are browsing versions and hierarchies. Note that Unicode is fully supported: The UI attempts to provide context-sensitive links where possible; notice here that an unloaded version is selected, so the UI shows a link. Clicking the link automatically brings up this Load Version dialog. This same thing applies elsewhere in the UI when you attempt to perform an action with an unloaded version: Here is browsing a hierarchy, with the property grid and context menu displayed (though you can hide the property grid anytime you like to provide more room): Worried about drag and drop? Don't! We support it even though this is a browser app. Also notice the Relationships feature on the right displaying a node's ancestors: Where possible, we try to present the available options, rather than just throwing up an "OK/Cancel" dialog (which most users never read anyway): Context-sensitive shortcuts automatically fill-in the context based on the currently selected node. For example, if you want to run a query using the selected node as the root, you can just click that query in the Shortcuts tab. In this screenshot, clicking Model After would model the selected node: This is just for starters. There is much more to cover, on both the client and server. For example, all communication channels are now configurable (no more DCOM). You can pick the ports, the encoding (binary or XML), and the transport mechanism (TCP, TCP over SSL, or SOAP over HTTP). All the relevant WS-* standards are also supported, eg: WS-Security, etc. Plus new features (besides the web client and unicode support). I hope to cover as much of these things as I can in the coming months. If you have specific requests, comment on this post and I'll try to cover them.

    Read the article

  • Oracle SQL Developer v3.2.1 Now Available

    - by thatjeffsmith
    Oracle SQL Developer version 3.2.1 is now available. I recommend that everyone now upgrade to this release. It features more than 200 bug fixes, tweaks, and polish applied to the 3.2 edition. The high profile bug fixes submitted by customers and users on our forums are listed in all their glory for your review. I want to highlight a few of the changes though, as I recognize many of you lack the time and/or patience to ‘read the docs.’ That would include me, which is why I enjoy writing these kinds of blog posts. I’m lazy – just like you! No more artificial line breaks between CREATE OR REPLACE and your PL/SQL In versions 3.2 and older, when you pull up your stored procedural objects in our editor, you would see a line break inserted between the CREATE OR REPLACE and then the body of your code. In version 3.2.1, we have removed the line break. 3.1 3.2.1 Trivia Did You Know? The database doesn’t store the ‘CREATE’ or ‘CREATE OR REPLACE’ bit of your PL/SQL code in the database. If we look at the USER_SOURCE view, we can see that the code begins with the object name. So the CREATE OR REPLACE bit is ‘artificial’ The intent is to give you the code necessary to recreate your object – and have it ‘compile’ into the database. We pretty much HAVE to add the ‘CREATE OR REPLACE.’ From now on it will appear inline with the first line of your code. Exporting Tables & Views When exporting data from your tables or views, previous versions of SQL Developer presented a 3 step wizard. It allows you to choose your columns and apply data filters for what is exported. This was kind of redundant. The grids already allowed you to select your columns and apply filters. Wouldn’t it be more intuitive AND efficient to just make the grids behave in a What You See Is What You Get (WYSIWYG) fashion? In version 3.2.1, that is exactly what will happen. The wizard now only has two steps and the grid will export the data and columns as defined in the visible grid. Let the grid properties define what is actually exported! And here is what is pasted into my worksheet: "BREWERY"|"CITY" "3 Brewers Restaurant Micro-Brewery"|"Toronto" "Amsterdam Brewing Co."|"Toronto" "Ball Brewing Company Ltd."|"Toronto" "Big Ram Brewing Company"|"Toronto" "Black Creek Historic Brewery"|"Toronto" "Black Oak Brewing"|"Toronto" "C'est What?"|"Toronto" "Cool Beer Brewing Company"|"Toronto" "Denison's Brewing"|"Toronto" "Duggan's Brewery"|"Toronto" "Feathers"|"Toronto" "Fermentations! - Danforth"|"Toronto" "Fermentations! - Mount Pleasant"|"Toronto" "Granite Brewery & Restaurant"|"Toronto" "Labatt's Breweries of Canada"|"Toronto" "Mill Street Brew Pub"|"Toronto" "Mill Street Brewery"|"Toronto" "Molson Breweries of Canada"|"Toronto" "Molson Brewery at Air Canada Centre"|"Toronto" "Pioneer Brewery Ltd."|"Toronto" "Post-Production Bistro"|"Toronto" "Rotterdam Brewing"|"Toronto" "Steam Whistle Brewing"|"Toronto" "Strand Brasserie"|"Toronto" "Upper Canada Brewing"|"Toronto" JUST what I wanted And One Last Thing Speaking of export, sometimes I want to send data to Excel. And sometimes I want to send multiple objects to Excel – to a single Excel file that is. In version 3.2.1 you can now do that. Let’s export the bulk of the HR schema to Excel, with each table going to it’s own workbook in the same worksheet. Select many tables, put them in in a single Excel worksheet If you try this in previous versions of SQL Developer it will just write the first table to the Excel file. This is one of the bugs we addressed in v3.2.1. 
Here is what the output Excel file looks like now: many tables – many worksheets in a single Excel workbook. I have a sneaking suspicion that this will be a frequently used feature going forward. Excel seems to be the cornerstone of many of our popular features. Imagine that!
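    The USER_SOURCE trivia above is easy to verify yourself. The query below is a minimal sketch (the object name MY_PROC is a hypothetical placeholder, not something from the post): the stored text begins with the object name, not with CREATE OR REPLACE.

        -- Minimal sketch: inspect the stored source of a PL/SQL object.
        -- 'MY_PROC' is a hypothetical object name.
        SELECT line, text
          FROM user_source
         WHERE name = 'MY_PROC'
           AND type = 'PROCEDURE'
         ORDER BY line;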

    Read the article

  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
    One of the biggest asks I have received from developers is whether there is any way to insert multiple rows into a single table in a single statement. Currently, when developers have to insert values into a table, they write multiple insert statements. Not only is this boring, it is also very time consuming, and one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we have demonstrated three different methods to insert multiple values into a single table. -- Insert Multiple Values into SQL Server CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100)); Method 1: Traditional Method of INSERT…VALUE -- Method 1 - Traditional Insert INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'); INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second'); INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third'); Clean up -- Clean up TRUNCATE TABLE #SQLAuthority; Method 2: INSERT…SELECT -- Method 2 - Select Union Insert INSERT INTO #SQLAuthority (ID, Value) SELECT 1, 'First' UNION ALL SELECT 2, 'Second' UNION ALL SELECT 3, 'Third'; Clean up -- Clean up TRUNCATE TABLE #SQLAuthority; Method 3: SQL Server 2008+ Row Construction -- Method 3 - SQL Server 2008+ Row Construction INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third'); Clean up -- Clean up DROP TABLE #SQLAuthority; Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
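    One practical caveat worth adding here (not covered in the video): a single VALUES row constructor accepts at most 1,000 rows, so larger loads can either split the statement or use INSERT INTO ... SELECT from a source table. The sketch below is self-contained and illustrative only; #SQLAuthorityBulk is a placeholder name and sys.objects merely a convenient source.

        -- Sketch: bulk insert via INSERT ... SELECT, which has no 1,000-row limit.
        CREATE TABLE #SQLAuthorityBulk (ID INT, Value VARCHAR(100));
        INSERT INTO #SQLAuthorityBulk (ID, Value)
        SELECT object_id, LEFT(name, 100)
        FROM sys.objects;
        DROP TABLE #SQLAuthorityBulk;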

    Read the article


  • Create a Shortcut To Group Policy Editor in Windows 7

    - by Mysticgeek
    If you’re a system administrator and find yourself making changes in Group Policy Editor, you might want to make a shortcut to it. Here we look at creating a shortcut, pinning it to the Taskbar, and adding it to Control Panel. Note: Local Group Policy Editor is not available in Home versions of Windows 7. Typing gpedit.msc into the search box in the Start menu to access Group Policy Editor can get old fast. To create a shortcut, right-click on the desktop and select New \ Shortcut. Next, type or copy the following path into the location field and click Next. c:\windows\system32\gpedit.msc Then give your shortcut a name…something like Group Policy, or whatever you want it to be, and click Finish. Now you have your Group Policy shortcut… If you want it on the Taskbar, just drag it there to pin it. And that’s all there is to it! If you want to change the icon, you can use one of the following guides… Customize Icons in Windows 7 Change a File Type Icon in Windows 7 Add Group Policy to Control Panel If you’re using non-Home versions of XP, Vista, or Windows 7, check out The Geek’s article on how to Add Group Policy Editor to Control Panel.

    Read the article

  • DevDays ‘10 The Netherlands day #2

    - by erwin21
    Day 2 of DevDays 2010, and again 5 interesting sessions at the World Forum in The Hague. The first session of the day, in the big World Forum theater, was from Scott Hanselman, who gave a lap around .NET 4.0. In his way of presenting he talked about all kinds of new features of .NET 4.0 like MEF, threading, parallel processing, changes and additions to the CLR and DLR, WPF, and all the new language features of .NET 4.0. After a small break it was time for session 2 from Scott Allen about tips, tricks and optimizations of LINQ. He talked about lazy and deferred execution, the difference between IQueryable and IEnumerable, and the two flavors of LINQ syntax. The lunch was again very well prepared and delicious, but after that it was time for session 3, Web Vulnerabilities and Exploits, from Alex Thissen. This was no normal session but more like a workshop: we decided which subjects we discussed, and the subjects were OWASP, XSS and other injections, validation, and encoding. He gave some handy tips and tricks on how to prevent such attacks. Session 4 was about the new features of C# 4.0 from Alex van Beek. He talked about optional and named parameters, generic co- and contra-variance, the dynamic keyword, and COM interop features. He showed how to use them but also when not to use them. The last session of the day, and also the last session of DevDays 2010, was about WCF best practices from Gerben van Loon. He talked about 7 best practices that you must know when you are going to use WCF. With some quick demos he showed the problem and the solution for some common issues. They were two interesting days, and next year I will surely be attending again.

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3Download the presentation here See also: Blog about Alert Monitoring and Problem Notification Blog about Using Operational Profiles to Install Packages and other content Here is quick summary of what you can do with OS Analytics in Ops Center: View historical charts and real time value of CPU, memory, network and disk utilization Find the top CPU and Memory processes in real time or at a certain historical day Determine proper monitoring thresholds based on historical data Drill down into a process details Where to start To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version):  In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: One of the cool things is that you can see the process tree for this process along with some port binding and open file descriptors. Next, click the "Processes" tab to see real time information of all the processes on the machine: An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center - this column will remain empty. The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". 
Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • SQL SERVER – Select Columns from Stored Procedure Resultset

    - by Pinal Dave
    It is fun to go back to basics often. Here is one classic question: “How to select columns from a stored procedure resultset?” Though stored procedures were introduced many years ago, the question about retrieving columns from a stored procedure resultset is still very popular with beginners. Let us see the solution in quick steps. First we will create a sample stored procedure. CREATE PROCEDURE SampleSP AS SELECT 1 AS Col1, 2 AS Col2 UNION SELECT 11, 22 GO Now we will create a table where we will temporarily store the result set of the stored procedure. We will be using the INSERT INTO … EXEC command to retrieve the values and insert them into the temporary table. CREATE TABLE #TempTable (Col1 INT, Col2 INT) GO INSERT INTO #TempTable EXEC SampleSP GO Next we will retrieve our data from the stored procedure. SELECT * FROM #TempTable GO Finally we will clean up all the objects which we have created. DROP TABLE #TempTable DROP PROCEDURE SampleSP GO Let me know if you want me to share such back to basics tips. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL
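    To round out the answer to the original question, here is a small sketch using the same objects created above (run it before the final clean-up step, while #TempTable still exists): once the resultset has been captured in the temporary table, individual columns can be selected just like from any ordinary table.

        -- Pick individual columns out of the captured resultset.
        SELECT Col1 FROM #TempTable;
        SELECT Col2 FROM #TempTable WHERE Col1 = 11;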

    Read the article

  • Stop Your Mouse from Waking Up Your Windows 7 Computer

    - by The Geek
    If you use Sleep Mode on your PC, have you ever noticed that moving your mouse will wake the computer from sleep mode? If you would prefer to only have the PC wake up when you hit a key instead, there’s a simple tweak. Just type Mouse into the Start menu search box, or the Control Panel search box, and then open up the Mouse Properties panel. Find the Hardware tab, select your mouse in the list, and then click the Properties button. You’ll have to click the “Change settings” button before you can see the Power Management tab… And now, you can uncheck the box “Allow this device to wake the computer”. That’s all there is to it.

    Read the article

  • Date Tracking in Oracle HRMS

    - by Manoj Madhusoodanan
    Update Date Track Modes
    To maintain employee data effectively, Oracle HCM uses a mechanism called date tracking. The main motive behind the date track modes is to maintain past, present and future data effectively. The various update date track modes are:
    CORRECTION : Overwrites the data; no history is maintained.
    UPDATE : Keeps the history; the new change takes effect as of the effective date.
    UPDATE_CHANGE_INSERT : Inserts the record and preserves future changes.
    UPDATE_OVERRIDE : Inserts the record and overrides future changes.
    Action: Created Employee # 22 on 01-JAN-2012. The record in PER_ALL_PEOPLE_F is as shown below (columns: Effective Start Date | Effective End Date | Employee Number | Marital Status | Object Version Number).
    01-JAN-2012 | 31-DEC-4712 | 24 | (none) | 2
    Action: Updated record in CORRECTION mode
    01-JAN-2012 | 31-DEC-4712 | 24 | Single | 3
    Action: Updated record in UPDATE mode effective 01-JUN-2012 and Marital Status = Married
    01-JAN-2012 | 31-MAY-2012 | 24 | Single | 4
    01-JUN-2012 | 31-DEC-4712 | 24 | Married | 5
    Action: Updated record in UPDATE mode effective 01-SEP-2012 and Marital Status = Divorced
    01-JAN-2012 | 31-MAY-2012 | 24 | Single | 4
    01-JUN-2012 | 31-AUG-2012 | 24 | Married | 6
    01-SEP-2012 | 31-DEC-4712 | 24 | Divorced | 7
    Action: Updated record in UPDATE_CHANGE_INSERT mode effective 01-MAR-2012 and Marital Status = Living Together
    01-JAN-2012 | 29-FEB-2012 | 24 | Single | 8
    01-MAR-2012 | 31-MAY-2012 | 24 | Living Together | 9
    01-JUN-2012 | 31-AUG-2012 | 24 | Married | 6
    01-SEP-2012 | 31-DEC-4712 | 24 | Divorced | 7
    Action: Updated record in UPDATE_OVERRIDE mode effective 01-AUG-2012 and Marital Status = Divorced
    01-JAN-2012 | 29-FEB-2012 | 24 | Single | 8
    01-MAR-2012 | 31-MAY-2012 | 24 | Living Together | 9
    01-JUN-2012 | 31-JUL-2012 | 24 | Married | 10
    01-AUG-2012 | 31-DEC-4712 | 24 | Divorced | 11
    Delete Date Track Modes
    The various delete date track modes are:
    ZAP : Wipes all records.
    DELETE : Deletes the current record.
    FUTURE_CHANGE : Deletes current and future changes.
    DELETE_NEXT_CHANGE : Deletes the next change.
    Element Entry records are shown below (columns: Effective Start Date | Effective End Date | Element Entry Id | Object Version Number).
    01-JAN-2012 | 12-OCT-2012 | 129831 | 3
    13-OCT-2012 | 19-OCT-2012 | 129831 | 5
    20-OCT-2012 | 31-DEC-4712 | 129831 | 6
    Action: Delete record in ZAP mode effective 14-JAN-2012
    No rows
    Action: Delete record in DELETE mode effective 14-OCT-2012
    01-JAN-2012 | 12-OCT-2012 | 129831 | 3
    13-OCT-2012 | 14-OCT-2012 | 129831 | 6
    Action: Delete record in FUTURE_CHANGE mode effective 14-JAN-2012
    01-JAN-2012 | 31-DEC-4712 | 129831 | 4
    Action: Delete record in NEXT_CHANGE mode effective 14-JAN-2012
    01-JAN-2012 | 19-OCT-2012 | 129831 | 4
    20-OCT-2012 | 31-DEC-4712 | 129831 | 6
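    As a minimal sketch of how date-tracked rows such as these are typically read (an illustration, not part of the original post, and assuming the standard PER_ALL_PEOPLE_F columns shown above): pick the single row whose effective date range covers the date of interest.

        -- Returns the row effective on 15-JUL-2012 for employee 24.
        SELECT employee_number,
               marital_status,
               effective_start_date,
               effective_end_date
          FROM per_all_people_f
         WHERE employee_number = '24'
           AND DATE '2012-07-15' BETWEEN effective_start_date AND effective_end_date;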

    Read the article

  • ERROR: Not enough space?

    - by dsmoljanovic
    Now this is a very unspecific question. I'm trying to figure out what this message would mean. Here is the story behind it: I'm installing Oracle enterprise manager cloud control (12c r3) on Solaris 10 (5/09). Installer opens up, i enter all needed information and at the last step click Install. It immediately crashes with only "ERROR: Not enough space" written in log and console and nothing else. Now, this could be java error or Solaris error? I'm thinking it's happening either when it starts to copy files or when it tries to launch a process that would do that. What space is it referring to? disk (have ehough), swap (also), memory (yep)... Any ideas are helpful. Edit: i found this exception in the oraInventory logs: oracle.sysman.oii.oiic.OiicInstallAPIException: Not enough space at oracle.sysman.oii.oiic.OiicAPIInstaller.initInstallSession(OiicAPIInstaller.java:2165) at oracle.sysman.oii.oiic.OiicAPIInstaller.initOUIAPISession(OiicAPIInstaller.java:790) at oracle.sysman.install.oneclick.EMGCOUIInstaller.prepareForInstall(EMGCOUIInstaller.java:676) at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext$1.run(EMGCSummaryDlgonNext.java:243) at java.lang.Thread.run(Thread.java:662) at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext.actionsOnClickofNext(EMGCSummaryDlgonNext.java:1067) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at oracle.sysman.install.oneclick.EMGCUtil.performonClickOfNextForClass(EMGCUtil.java:399) at oracle.sysman.install.oneclick.EMGCUtil.performPageLevelValidationsForSilentInstall(EMGCUtil.java:367) at oracle.sysman.install.oneclick.EMGCInstaller.prepareForSilentInstall(EMGCInstaller.java:1459) at oracle.sysman.install.oneclick.EMGCInstaller.main(EMGCInstaller.java:1553) disk status: bash-3.00$ df -h /tmp Filesystem size used avail capacity Mounted on swap 8.1G 2.7G 5.4G 33% /tmp bash-3.00$ df -h /u01 Filesystem size used avail capacity Mounted on / 275G 28G 244G 11% / swap: root@gs12emcc # swap -s total: 18306040k bytes allocated + 3837808k reserved = 22143848k used, 5712664k available

    Read the article

  • Is CRM keeping up with the times?

    - by antonella.buonagurio(at)oracle.com
    Social Customer Relationship Management was born out of the revolution brought by Web 2.0, an epochal change in the way we communicate that has added incredible richness to the conversations between companies and consumers. Companies now have unprecedented tools to understand their market; consumers, in turn, have the power to use new channels to express their needs and to communicate and share comments and experiences. But Web 2.0 is not the only factor affecting CRM strategy that every company must consider in order to sustain this new relationship with its consumers. Do you want to find out which forces (or factors) companies must take into account so that their customer relationship management processes keep pace with changed social and economic conditions? To learn more: the whitepaper produced by Oracle, Paul Gillin and IT Business Edge outlines some of them: 1. The business. How has it changed as a result of the multichannel experience now possible, of customer centricity, and of the social networks that dominate online relationships? 2. Technology. To gain competitive advantage, companies today must adopt the most innovative technologies to add value to their business and minimize infrastructure costs. What are they, and what are the actual benefits? And more still… read the white paper "Is your CRM solution keeping up with the times?"

    Read the article

  • The Importance of Collaboration, Analytics, and Mobile Technologies for Modern HR

    - by HCM-Oracle
    It was 17 years ago, when a McKinsey study uncovered the “war for talent”. Today, it is no point of contention that a strong talent-centric strategy maybe the most important focus for organizations. A talent-centric organization aims at recruiting, retaining and developing the best talent.  The best employees will be able to adapt responsibilities and be able to come up with solutions to solve problems, which are important skills in today’s dynamic work environment, and arguably more important in this recessionary climate.   The notion of hiring and retaining talented employees for organizational sustainability and competitive advantage is not a new concept. But can organizations consider themselves as having a “talent-centric” strategy without up-to-date collaboration tools, HR analytics and mobile technologies in pursuit of attracting, hiring and retaining the best talent? Attend the Upcoming Webcast A webcast on June 19th at 3pm EST will reveal more results of the study. Based on original research done in collaboration between Oracle HCM and HCI, we unveil new findings that explore how critical collaboration, analytic insights and mobile technology are for supporting a talent-centric work environment. You will learn: What are the benefits to being talent-centric? How does collaboration via social networks, analytics with predictive insights and mobile technologies support the talent-centric strategy of an organization? What is the state of play for these technologies? Register Here 

    Read the article

  • How can sales and production processes be integrated in a smart way?

    - by Claudia Caramelli-Oracle
    Technological innovation has transformed the way customers interact with companies. In addition, today's market scenarios demand focus and effectiveness in sales in order to remain fully competitive. To achieve the best sales performance, it is necessary to accelerate and automate the exchange of information between sales and production departments, minimizing the time spent waiting for technical data and feasibility approvals, and reducing bottlenecks and possible human errors through a process of controlling and standardizing the offer. The event sponsors look forward to seeing you on June 11 at the prestigious headquarters of the Unione Industriale di Torino to discover how to: Shorten the sales cycle by making the entire sales process more efficient Minimize the impact of sales staff turnover Improve the value to promise Achieve better loyalty and satisfaction from your customers, reducing switching Watch a live demonstration by Oracle, the world leader in CPQ (Configure, Price and Quote) solutions, of a fast, easy-to-use tool that enables smart management of the commercial configuration of a B2B offer, including mobile access and executive dashboards Learn how other companies have successfully adopted these business solutions. Attendance at the event is free but capacity is limited; register now to secure your place: CLICK HERE to register. If you need more information, write to Silvia Valgoi.

    Read the article

  • EBS Seed Data Comparison Reports Now Available

    - by Steven Chan (Oracle Development)
    Earlier this year we released a reporting tool that reports on the differences in E-Business Suite database objects between one release and another.  That's a very useful reference, but EBS defaults are delivered as seed data within the database objects themselves. What about the differences in this seed data between one release and another? I'm pleased to announce the availability of a new tool that provides comparison reports of E-Business Suite seed data between EBS 11.5.10.2, 12.0.4, 12.0.6, 12.1.1, and 12.1.3.  This new tool complements the information in the data model comparison tool.  You can download the new seed data comparison tool here: EBS ATG Seed Data Comparison Report (Note 1327399.1) The EBS ATG Seed Data Comparison Report provides report on the changes between different EBS releases based upon the seed data changes delivered by the product data loader files (.ldt extension) based on EBS ATG loader control (.lct extension) files.  You can use this new tool to report on the differences in the following types of seed data: Concurrent Program definitions Descriptive Flexfield entity definitions Application Object Library profile option definitions Application Object Library (AOL) key flexfield, function, lookups, value set definitions Application Object Library (AOL) menu and responsibility definitions Application Object Library messages Application Object Library request set definitions Application Object Library printer styles definitions Report Manager / WebADI component and integrator entity definitions Business Intelligence Publisher (BI Publisher) entity definitions BIS Request Set Generator entity definitions ... and more Your feedback is welcomeThis new tool was produced by our hard-working EBS Release Management team, and they're actively seeking your feedback.  Please feel free to share your experiences with it by posting a comment here.  You can also request enhancements to this tool via the distribution list address included in Note 1327399.1.Related Articles Oracle E-Business Suite Release 12.1.3 Now Available New Whitepaper: Upgrading EBS 11i Forms + OA Framework Personalizations to EBS 12 EBS 12.0 Minimum Requirements for Extended Support Finalized Five Key Resources for Upgrading to E-Business Suite Release 12 E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available New Whitepaper: Planning Your E-Business Suite Upgrade from Release 11i to 12.1

    Read the article

  • Hosted EBS 11i Integration Repository Temporarily Offline

    - by Steven Chan (Oracle Development)
    Most developers know that they can integrate their external applications with the E-Business Suite via the business service interfaces and SOA service endpoints documented in the E-Business Suite's Integration Repository. This is shipped as part of EBS 12. Until recently, it was provided as a hosted environment on the Oracle.com domain for EBS 11i. Unfortunately, we identified some standards-related issues in the process of switching from the existing server that hosts the EBS 11i environment to a new one, notably in the area of accessibility. Some of those issues will require coding changes to resolve. Given our focus on EBS 12.2 right now, it may take some time to prioritize this relative to our other existing commitments. In the meantime, we are required to suspend access to the EBS 11i Integration Repository. I don't have a firm schedule for getting this back online yet, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available. Related Articles Integration Repository for the E-Business Suite New Whitepaper: Primer on Integrating EBS 12 with Other Applications

    Read the article

  • Social Media Stations for Partners

    - by Oracle OpenWorld Blog Team
    By Stephanie Spada One of our exciting additions to this year’s Oracle Partner Network Exchange @ OpenWorld are Social Media Stations.  Partners have the opportunity to get customized, face-to-face expert advice on how they can better engage their customers and find new prospects online using social media tools.When: Sunday, September 30Time: 3:00 p.m.–5:00 p.m.Where: Moscone South, Esplanade levelWhen: Monday, October 1Time:  9:30 a.m.–6:00 p.m.Where: Moscone South, OPN Lounge, Exhibitor levelEach customized social media consultation will take only 25 minutes. Here’s how it works:·    Partners check in with a Social Media Rally coordinator who will assess needs and make the right connections for each session·    Partners go to the Photo Station, where a headshot will be taken that can be used on social profiles, Websites or for articles and posts across the Web·    Partners meet with the One-2-One consultants who will walk them through how they’re using social media today and what next steps could beSocial media channels/methods discussed can include Google+, Google Alerts, Google Analytics, Facebook, LinkedIn, Search Engine Optimization, Twitter, and more.  With so many choices, partners can decide how to focus their time.To get the most out of the Social Media Stations, partners should:·    Wear appropriate attire for the headshot photo·    Bring log-in information for social platforms they want to discuss·    Come prepared with questions for the One-2-One consultation so session time can be maximizedFor questions, or to schedule a session ahead of time, partners should send an email to: [email protected].

    Read the article

  • Procurement and E-Business Suite Product Analyzers .. Can you use this tool to resolve your SR?

    - by LindaJ-Oracle
    Procurement and E-Business Suite Product Analyzers (Doc ID 1545562.1). Analyzers are query/read-only tools with easy-to-read HTML output. The tools are delivered by EBS Support via My Oracle Support document IDs for ease of use. The Analyzer scripts are meant to be part of your production maintenance program, run by your sysadmin or by designated end users. The result is an easy-to-read HTML report that provides recommendations, solutions and early warnings about items that should be reviewed and corrected. Each analyzer can be run on demand or scheduled for repeatability and emailed to critical reviewers. There are several Analyzers available for E-Business Suite Applications Technology Group, Financials, and Manufacturing, covering some of the following topics. Review them all at (Doc ID 1545562.1). Workflow Concurrent Processing Clone Log Parser Utility (Rapid Clone) Invoices, Payments, Accounting, Suppliers and EBTax Validate Data before Period Close EBTax Setup Payables Trial Balance Internet Expenses AutoInvoice Post-Process ASCP Performance PO Approval iProcurement Items For the Procurement-specific Analyzers, access them directly at: R12 IP Item Analyzer Diagnostic Script (Doc ID 1586248.1) R12: PO Approval Analyzer Diagnostic Script (Doc ID 1525670.1)

    Read the article

  • How to Disable Access to the Registry in Windows 7

    - by Mysticgeek
    If you don’t know what you’re doing in the Registry, you can mess up your computer pretty badly. Today we show you how to prevent users from accessing the Registry and making any changes to it. Using Local Group Policy Editor Note: This method uses Group Policy Editor, which is not available in Home versions of Windows. First type gpedit.msc into the Search box in the Start menu. When Group Policy Editor opens, navigate to User Configuration \ Administrative Templates then select System. Under Setting in the right panel double-click on Prevent access to registry editing tools. Select the radio button next to Enabled, click OK, then close out of Group Policy Editor. Now if a user tries to access the Registry… They will get the following message advising they cannot access it. Using Registry Enabler & Disabler 3 If you’re using a Home or Starter version of Windows 7, you can use a neat utility called Registry Enabler & Disabler (link below). This app works on XP and Vista as well. There is no installation involved, so you can run it from a flash drive, disable the registry, then take the flash drive with you while the user is on the machine. Again, if the user tries to access the Registry they will get the following error… Using one of these options will stop users from gaining access to the Registry or running any registry hacks. Of course if you have a shared computer, you may want to set up other users with a Standard Account, as they won’t be able to make changes to the Registry anyway. Download Registry Enabler & Disabler 3

    Read the article

  • How-to tell the ViewCriteria a user chose in an af:query component

    - by frank.nimphius
    The af:query component defines a search form for application users to enter search conditions for a selected View Criteria. A View Criteria is a named where clause that you can create declaratively on the ADF Business Components View Object. A default View Criteria that allows users to search on all attributes exists by default and is exposed in the Data Controls panel. To create an ADF Faces search form, expand the View Object node that contains the View Criteria definition in the Data Controls panel. Drag the View Criteria that should be displayed as the default criteria onto the page and choose Query in the opened context menu. One of the options within the Query option is to create an ADF Query Panel with Table, which displays the result set in a table view that can have additional column filters defined. To intercept the user query for modification, or just to know about the selected View Criteria, you override the QueryListener property on the af:query component or the af:table component. Overriding the QueryListener on the table makes sense if the table allows users to further filter the result set using column filters. To override the default QueryListener, copy the existing string referencing the binding layer to the clipboard and then select Edit from the field context menu (press the arrow icon to open it) to select or create a new managed bean and method to handle the query event. The code below is from a managed bean with custom query listener handlers defined for the af:query component and the af:table component.
    The default listener entry copied to the clipboard was "#{bindings.ImplicitViewCriteriaQuery.processQuery}".
    public void onQueryList(QueryEvent queryEvent) {
      // The generated QueryListener replaced by this method:
      // #{bindings.ImplicitViewCriteriaQuery.processQuery}
      QueryDescriptor qdes = queryEvent.getDescriptor();
      // print or log the selected View Criteria
      System.out.println("NAME " + qdes.getName());
      // call the default query event
      invokeQueryEventMethodExpression(
          "#{bindings.ImplicitViewCriteriaQuery.processQuery}", queryEvent);
    }
    public void onQueryTable(QueryEvent queryEvent) {
      // The generated QueryListener replaced by this method:
      // #{bindings.ImplicitViewCriteriaQuery.processQuery}
      QueryDescriptor qdes = queryEvent.getDescriptor();
      // print or log the selected View Criteria
      System.out.println("NAME " + qdes.getName());
      invokeQueryEventMethodExpression(
          "#{bindings.ImplicitViewCriteriaQuery.processQuery}", queryEvent);
    }
    private void invokeQueryEventMethodExpression(
        String expression, QueryEvent queryEvent) {
      FacesContext fctx = FacesContext.getCurrentInstance();
      ELContext elctx = fctx.getELContext();
      ExpressionFactory efactory =
          fctx.getApplication().getExpressionFactory();
      MethodExpression me =
          efactory.createMethodExpression(elctx, expression,
                                          Object.class,
                                          new Class[]{QueryEvent.class});
      me.invoke(elctx, new Object[]{queryEvent});
    }
    Of course, this code can also be used as a starting point for other query manipulations, and it also works with saved custom criteria. To read more about the af:query component, see: http://download.oracle.com/docs/cd/E15523_01/apirefs.1111/e12419/tagdoc/af_query.html

    Read the article

  • SQL SERVER – Fix: Error: 147 An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference

    - by pinaldave
    Everybody was a beginner once, and I always like to get involved in questions from beginners. There is a big difference between a question from a beginner and a question from an advanced user. I have noticed that if an advanced user gets an error, they usually need just a small hint to resolve the problem. However, when a beginner gets an error, he sometimes sits on it for a long time, as he or she has no idea how to solve the problem and no idea of the capability of the product. I recently received a very novice-level question, and when I read it I quickly saw where the user was stuck. When I replied with the solution, he wrote a long email explaining how he had not been able to solve the problem and thanked me multiple times. This whole thing inspired me to write this quick blog post. I have modified the user’s question to match the code with AdventureWorks, as well as simplified it so it contains the core content which I wanted to discuss. Problem Statement: Find all the details of SalesOrderHeaders for the latest ShipDate. He came up with the following T-SQL query: SELECT * FROM [Sales].[SalesOrderHeader] WHERE ShipDate = MAX(ShipDate) GO When he executed the above script it gave him the following error: Msg 147, Level 15, State 1, Line 3 An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference. He was not able to resolve this problem, even though a hint toward the solution was given in the error message itself. Due to lack of experience he came up with another version of the above query based on the error message. SELECT * FROM [Sales].[SalesOrderHeader] HAVING ShipDate = MAX(ShipDate) GO When he ran the above query it produced another error. Msg 8121, Level 16, State 1, Line 3 Column ‘Sales.SalesOrderHeader.ShipDate’ is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause. What he actually wanted was all the SalesOrderHeader rows for sales shipped on the last day. Based on the problem statement, the right solution is the following, which does not generate an error. SELECT * FROM [Sales].[SalesOrderHeader] WHERE ShipDate = (SELECT MAX(ShipDate) FROM [Sales].[SalesOrderHeader]) Well, that’s it! Very simple. With SQL Server there are always multiple solutions to a single problem. Is there any other solution available to the problem stated? Please share in the comments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
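    One more possible answer to the closing question, sketched here as an alternative that is not shown in the original post: TOP (1) WITH TIES avoids the subquery entirely while still returning every order shipped on the latest ShipDate.

        -- Alternative solution: no subquery needed.
        SELECT TOP (1) WITH TIES *
        FROM [Sales].[SalesOrderHeader]
        ORDER BY ShipDate DESC;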

    Read the article

  • Oracle B2B - Synchronous Request Reply

    - by cdwright
    Introduction So first off, let me say I didn't create this demo (although I did modify it some). I got it from a member of the B2B development technical staff. Since it came with only a simple readme file, I thought I would take some time and write a more detailed explanation about how it works. Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request reply over http using the b2b/syncreceiver servlet. I’m attaching the demo to this blog which includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test xml file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool (I'm using Firefox Poster here). You can download the zip file containing the demo here. The demo works by sending the sample xml request file (req.xml) to http://<b2bhost>:8001/b2b/syncreceiver using the SOAP test tool.  The syncreceiver servlet keeps the socket connection open between itself and the test tool so that it can synchronously send the reply message back. When B2B receives the inbound request message, it is passed to the SOA composite through the default B2B Fabric binding. A simple reply is created in BPEL and returned to B2B which then sends the message back to the test tool using that same socket connection. I’ll show you the B2B configuration first, then we’ll look at the soa composite. Configuring B2B No additional configuration necessary in order to use the syncreceiver servlet. It is already running when you start SOA. After importing the GC_SyncReqRep.zip repository file into B2B, you’ll have the typical GlobalChips host trading partner and the Acme remote trading partner. Document Management The repository contains two very simple custom XML document definitions called Orders and OrdersResponse. In order to determine the trading partner agreement needed to process the inbound Orders document, you need to know two things about it; what is it and where it came from. So let’s look at how B2B identifies the appropriate document definition for the message. The XSD’s for these two document definitions themselves are not particularly interesting. Whenever you're dealing with custom XML documents, B2B identifies the appropriate document definition for each XML message using an XPath Identification Expression. The expression is entered for each of these document definitions under the document administration tab in the B2B console. The full XPATH expression for the Orders document is  //*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text(). You can see this path in the XSD diagram below and how it uniquely identifies this message. The OrdersReponse document is identified in the same way. The XPath expression for it is //*[local-name()='Response']/*[local-name()='Status']/text(). You can see how it’s path differs uniquely identifying the reply from the request. Trading Partner Profile The trading partner profiles are very simple too. For GlobalChips, a generic identifier is being used to identify the sender of the response document using the host trading partner name. For Acme, a generic identifier is also being used to identify the sender of the inbound request using the remote trading partner name. The document types are added for the remote trading partner as usual. So the remote trading partner Acme is the sender of the Orders document, and it is the receiver of the OrdersResponse document. 
    Trading Partner Agreement: The agreements are equally simple. There is no validation, and translation is not an option for a custom XML document type. For the InboundAgreement (request), the document definition is set to OrdersDef. In the Agreement Parameters section the generic identifiers have been added for the host and remote trading partners. That's all that is needed for the inbound transaction. For the OutboundAgreement (response), the document definition is set to OrdersResponseDef and the generic identifiers for the two trading partners are added. The remote trading partner's dummy delivery channel is also added to the agreement.

    SOA Composite: Import the SOA composite archive into JDeveloper as an EJB JAR file. Open the composite and you should have a project that looks like this. In the composite, open the b2bInboundSyncSvc exposed service and advance through the setup wizard. Select your Application Server Connection and advance to the Operations window. Notice here that the B2B binding is set to Receive; it is not set for Synchronous Request Reply. Continue advancing through the wizard as you normally would and select Finish at the end. Now open BPELProcess1 in the composite. The BPEL process is set up as a Synchronous Request Reply, as you can see below. The while loop is there just to give the process something to do. The actual reply message is prepared in the assignResponseValues assignment, followed by an Invoke of the B2B binding. Open the replyResponse Invoke and go to the Properties tab. You'll see that the fromTradingPartnerId, toTradingPartner, documentTypeName, and documentProtocolRevision properties have been set.

    Testing the Configuration: To test the configuration, I used Firefox Poster. Enter the URL for the b2b/syncreceiver servlet and browse for the req.xml file that contains the test request message. In the Headers tab, add the property 'from' and give it the value 'Acme'. This is how B2B will know where the message is coming from, and it will use that information along with the document type name to find the right trading partner agreement. Now post the message. You should get back a response with a status of '200 OK'. That's all there is to it. (If you prefer a programmatic client instead of Firefox Poster, see the sketch below.)
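    If Firefox Poster isn't handy, the same test can be driven programmatically. The sketch below is a stand-in for the test tool, not part of the original demo: it posts a local req.xml to the syncreceiver servlet with the 'from: Acme' header using java.net.HttpURLConnection and prints the synchronous reply. The host name, port, and content type are placeholder assumptions; adjust them to your environment.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SyncReceiverTestClient {
        public static void main(String[] args) throws Exception {
            // Placeholder host/port; replace with your SOA server.
            URL url = new URL("http://b2bhost:8001/b2b/syncreceiver");
            byte[] request = Files.readAllBytes(Paths.get("req.xml"));

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            // The 'from' header tells B2B which remote trading partner sent the message.
            conn.setRequestProperty("from", "Acme");
            conn.setRequestProperty("Content-Type", "text/xml");

            try (OutputStream out = conn.getOutputStream()) {
                out.write(request);
            }

            // The servlet holds the connection open and writes the reply on it,
            // so the synchronous response body is read back here.
            System.out.println("HTTP status: " + conn.getResponseCode());
            try (InputStream in = conn.getInputStream()) {
                System.out.println(new String(in.readAllBytes(), "UTF-8"));
            }
        }
    }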

    Read the article

  • Apache-Mina FTPServer Issue — unable to login into apache ftp server while using database user manager

    - by piyush
    I am unable to log in to the Apache FTP server while using the database user manager. When I enter the username and password, I get the following error in the log file:

    [ INFO] 2013-02-07 20:51:07,779 [] [0:0:0:0:0:0:0:1] RECEIVED: USER piyush
    [ INFO] 2013-02-07 20:51:07,781 [piyush] [0:0:0:0:0:0:0:1] SENT: 331 User name okay, need password for piyush.
    [ INFO] 2013-02-07 20:51:07,784 [piyush] [0:0:0:0:0:0:0:1] RECEIVED: PASS *****
    [ WARN] 2013-02-07 20:51:07,785 [piyush] [0:0:0:0:0:0:0:1] User failed to log in
    [ WARN] 2013-02-07 20:51:08,285 [piyush] [0:0:0:0:0:0:0:1] Login failure - piyush
    [ INFO] 2013-02-07 20:51:08,286 [piyush] [0:0:0:0:0:0:0:1] SENT: 530 Authentication failed.
    [ INFO] 2013-02-07 20:51:08,286 [piyush] [0:0:0:0:0:0:0:1] RECEIVED: QUIT
    [ INFO] 2013-02-07 20:51:08,290 [piyush] [0:0:0:0:0:0:0:1] SENT: 221 Goodbye.
    [ INFO] 2013-02-07 20:51:08,291 [piyush] [0:0:0:0:0:0:0:1] CLOSED

    Here is my XML file, ftpd-typical.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <!--
      Licensed to the Apache Software Foundation (ASF) under one or more contributor
      license agreements. See the NOTICE file distributed with this work for additional
      information regarding copyright ownership. The ASF licenses this file to you under
      the Apache License, Version 2.0 (the "License"); you may not use this file except
      in compliance with the License. You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
      Unless required by applicable law or agreed to in writing, software distributed
      under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
      CONDITIONS OF ANY KIND, either express or implied. See the License for the
      specific language governing permissions and limitations under the License.
    -->
    <server xmlns="http://mina.apache.org/ftpserver/spring/v1"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:beans="http://www.springframework.org/schema/beans"
            xsi:schemaLocation="
              http://mina.apache.org/ftpserver/spring/v1
              http://mina.apache.org/ftpserver/ftpserver-1.0.xsd
            "
            id="Prometheus">
      <listeners>
        <nio-listener name="default" port="2121" />
      </listeners>
      <db-user-manager encrypt-passwords="salted">
        <data-source>
          <beans:bean class="org.apache.commons.dbcp.BasicDataSource">
            <beans:property name="driverClassName" value="com.mysql.jdbc.Driver" />
            <beans:property name="url" value="jdbc:mysql://localhost/apache_test" />
            <beans:property name="username" value="amy" />
            <beans:property name="password" value="piyush" />
          </beans:bean>
        </data-source>
        <insert-user>INSERT INTO FTP_USER (userid, userpassword, homedirectory, enableflag, writepermission, idletime, uploadrate, downloadrate) VALUES ('{userid}', '{userpassword}', '{homedirectory}', {enableflag}, {writepermission}, {idletime}, {uploadrate}, {downloadrate})</insert-user>
        <update-user>UPDATE FTP_USER SET userpassword='{userpassword}', homedirectory='{homedirectory}', enableflag={enableflag}, writepermission={writepermission}, idletime={idletime}, uploadrate={uploadrate}, downloadrate={downloadrate} WHERE userid='{userid}'</update-user>
        <delete-user>DELETE FROM FTP_USER WHERE userid = '{userid}'</delete-user>
        <select-user>SELECT userid, userpassword, homedirectory, enableflag, writepermission, idletime, uploadrate, downloadrate, maxloginnumber, maxloginperip FROM FTP_USER WHERE userid = '{userid}'</select-user>
        <select-all-users>SELECT userid FROM FTP_USER ORDER BY userid</select-all-users>
        <is-admin>SELECT userid FROM FTP_USER WHERE userid='{userid}' AND userid='admin'</is-admin>
        <authenticate>SELECT userpassword from FTP_USER WHERE userid='{userid}'</authenticate>
      </db-user-manager>
    </server>
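    For context, the configured queries assume an FTP_USER table with the columns they reference. Below is a minimal sketch of creating such a table over JDBC; the column types and sizes are assumptions (the database user manager only cares about the column names used in the configured SQL). Note also that with encrypt-passwords="salted", the userpassword column is expected to hold the salted hash produced by the server's password encryptor rather than a clear-text password, so rows inserted by hand with plain passwords would not authenticate.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateFtpUserTable {
        public static void main(String[] args) throws Exception {
            // Connection details copied from the data-source in ftpd-typical.xml.
            String url = "jdbc:mysql://localhost/apache_test";
            try (Connection con = DriverManager.getConnection(url, "amy", "piyush");
                 Statement st = con.createStatement()) {
                // Column names match the SQL configured in <db-user-manager>;
                // types and lengths here are assumptions, not FtpServer requirements.
                st.executeUpdate(
                    "CREATE TABLE IF NOT EXISTS FTP_USER ("
                    + " userid          VARCHAR(64) NOT NULL PRIMARY KEY,"
                    + " userpassword    VARCHAR(128),"
                    + " homedirectory   VARCHAR(255) NOT NULL,"
                    + " enableflag      BOOLEAN DEFAULT TRUE,"
                    + " writepermission BOOLEAN DEFAULT TRUE,"
                    + " idletime        INT DEFAULT 0,"
                    + " uploadrate      INT DEFAULT 0,"
                    + " downloadrate    INT DEFAULT 0,"
                    + " maxloginnumber  INT DEFAULT 0,"
                    + " maxloginperip   INT DEFAULT 0)");
                System.out.println("FTP_USER table is in place.");
            }
        }
    }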

    Read the article
