Search Results

Search found 37654 results on 1507 pages for 'function prototypes'.


  • Triangle Line-Segment Intersection - detecting near misses

    - by Will
    A ray is a very poor approximation of a player! I think approximating a player with a sphere traveling a straight line each game tick will solve my problems of the player intersecting edges of scenery: their line segment missed it, yet their own model is not infinitely thin...

    I have a 3D triangle and a line segment. I have the normal triangle/line-segment intersection code, which I admit I have only a woolly grasp of. To model movement and compute collisions of the player, I have to determine if a line passes within sphere-radius of a triangle. But I can find no convenient line near-miss intersection code! Here's the classic triangle intersection code, with my starting assumptions in the ### comments:

        function triangle_ray_intersection(a,b,c,ray_origin,ray_dir,ray_radius) {
            // http://softsurfer.com/Archive/algorithm_0105/algorithm_0105.htm#intersect_RayTriangle%28%29
            // get triangle edge vectors and plane normal
            var u = vec3_sub(b,a);
            var v = vec3_sub(c,a);
            var n = vec3_cross(u,v);
            if(n[0]==0 && n[1]==0 && n[2]==0)
                return null; // triangle is degenerate
            var w0 = vec3_sub(ray_origin,a);
            var j = vec3_dot(n,ray_dir);
            if(Math.abs(j) < 0.00000001) {
                //### if parallel, might still pass within ray_radius of it
                return null; // parallel, disjoint or on plane
            }
            var i = -vec3_dot(n,w0);
            // get intersect point of ray with triangle plane
            var k = i / j;
            if(k < 0.0)
                return null; // ray goes away from triangle
            //### as it's a line segment, k > 1+ray_radius means no intersect
            var hit = vec3_add(ray_origin,vec3_scale(ray_dir,k)); // intersect point of ray and plane
            // is I inside T?
            //### here I'm a bit lost; this is presumably computing barycentric coordinates?
            var uu = vec3_dot(u,u);
            var uv = vec3_dot(u,v);
            var vv = vec3_dot(v,v);
            var w = vec3_sub(hit,a);
            var wu = vec3_dot(w,u);
            var wv = vec3_dot(w,v);
            var D = uv * uv - uu * vv;
            var s = (uv * wv - vv * wu) / D;
            //### therefore, compute if it's within ray_radius scaled to the 0..1 of barycentric coordinates?
            if(s<0.0 || s>1.0)
                return null; // I is outside T
            var t = (uv * wu - uu * wv) / D;
            if(t<0.0 || (s+t)>1.0)
                return null; // I is outside T
            //### finally, if it passes a barycentric test it might still be too far
            //### from the triangle near a corner; must check that its distance from the corner is within ray_radius too if more than one barycentric coord is >1
            //### so we have rounded corners...
            return [hit,n]; // I is in T
        }

    Given the distance between the point of plane intersection and each corner, I ought to be able to determine the world-scale distance by which that point lies beyond the edge - beyond 1.0 in barycentric coordinates for each axis... At this point my head explodes! Is this the right track? What's the actual code?

    UPDATE: you can earn 100 pts on SO if you answer this question there...! How can you determine if a line segment passes within some distance of a triangle?
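    One workable answer to the closing question: compute the minimum distance between the segment and the triangle, and compare it against the sphere radius. Below is a sketch of that approach (in Java rather than the poster's JavaScript, with vectors assumed to be double[3]); the closest-point-on-triangle routine follows the standard region tests described in Ericson's Real-Time Collision Detection, and the minimum over the segment is found by ternary search, which is valid because the distance from a point on the segment to the triangle (a convex set) is a convex function of the segment parameter.

        public final class SegmentTriangle {
            static double[] sub(double[] u, double[] v) { return new double[]{u[0]-v[0], u[1]-v[1], u[2]-v[2]}; }
            static double[] add(double[] u, double[] v) { return new double[]{u[0]+v[0], u[1]+v[1], u[2]+v[2]}; }
            static double[] scale(double[] u, double s) { return new double[]{u[0]*s, u[1]*s, u[2]*s}; }
            static double dot(double[] u, double[] v)  { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }

            // Closest point on triangle (a,b,c) to point p, by classifying p
            // into a vertex, edge, or interior (face) region.
            static double[] closestPointOnTriangle(double[] p, double[] a, double[] b, double[] c) {
                double[] ab = sub(b, a), ac = sub(c, a), ap = sub(p, a);
                double d1 = dot(ab, ap), d2 = dot(ac, ap);
                if (d1 <= 0 && d2 <= 0) return a;                      // vertex region A
                double[] bp = sub(p, b);
                double d3 = dot(ab, bp), d4 = dot(ac, bp);
                if (d3 >= 0 && d4 <= d3) return b;                     // vertex region B
                double vc = d1 * d4 - d3 * d2;
                if (vc <= 0 && d1 >= 0 && d3 <= 0)                     // edge AB
                    return add(a, scale(ab, d1 / (d1 - d3)));
                double[] cp = sub(p, c);
                double d5 = dot(ab, cp), d6 = dot(ac, cp);
                if (d6 >= 0 && d5 <= d6) return c;                     // vertex region C
                double vb = d5 * d2 - d1 * d6;
                if (vb <= 0 && d2 >= 0 && d6 <= 0)                     // edge AC
                    return add(a, scale(ac, d2 / (d2 - d6)));
                double va = d3 * d6 - d5 * d4;
                if (va <= 0 && d4 - d3 >= 0 && d5 - d6 >= 0)           // edge BC
                    return add(b, scale(sub(c, b), (d4 - d3) / ((d4 - d3) + (d5 - d6))));
                double denom = 1.0 / (va + vb + vc);                   // interior: barycentric combination
                return add(a, add(scale(ab, vb * denom), scale(ac, vc * denom)));
            }

            static double distanceToTriangle(double[] p, double[] a, double[] b, double[] c) {
                double[] d = sub(p, closestPointOnTriangle(p, a, b, c));
                return Math.sqrt(dot(d, d));
            }

            // Does the segment p0..p1 pass within `radius` of triangle (a,b,c)?
            static boolean segmentNearTriangle(double[] p0, double[] p1,
                                               double[] a, double[] b, double[] c, double radius) {
                double lo = 0, hi = 1;
                for (int i = 0; i < 50; i++) {                         // ternary search on a convex function
                    double m1 = lo + (hi - lo) / 3, m2 = hi - (hi - lo) / 3;
                    double[] q1 = add(p0, scale(sub(p1, p0), m1));
                    double[] q2 = add(p0, scale(sub(p1, p0), m2));
                    if (distanceToTriangle(q1, a, b, c) < distanceToTriangle(q2, a, b, c)) hi = m2;
                    else lo = m1;
                }
                double[] q = add(p0, scale(sub(p1, p0), (lo + hi) / 2));
                return distanceToTriangle(q, a, b, c) <= radius;
            }
        }

    This yields the yes/no near-miss test the question asks for; a full swept-sphere collision response would additionally need the first parameter value at which the distance drops to the radius.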

    Read the article

  • Creating my first F# program in my new “Expert F# Book”

    - by MarkPearl
    So I have a brief hour or so that I can dedicate today to reading my F# book. It’s a public holiday and my wife’s birthday and I have a ton of assignments for UNISA that I need to complete – but I just had to try something in F#. So I read chapter 1 – pretty much an introduction to the rest of the book – it looks good so far. Then I get to chapter 2, called “Getting Started with F# and .NET”. Great, there is a code sample on the first page of the chapter. So I open up VS2010, create a new F# console project, and type in the code, which was meant to analyze a string for duplicate words...

        #light
        let wordCount text =
            let words = Split [' '] text
            let wordset = Set.ofList words
            let nWords = words.Length
            let nDups = words.Length - wordSet.Count
            (nWords, nDups)

        let showWordCount text =
            let nWords,nDups = wordCount text
            printfn "--> %d words in text" nWords
            printfn "--> %d duplicate words" nDups

    So... bad start - VS does not like the “Split” method. It gives me the error message “The value constructor ‘Split’ is not defined”. It also doesn’t like wordSet.Count, telling me that the “namespace or module ‘wordSet’ is not defined”. ??? So a bit of googling, and it turns out that there was a bit of shuffling of libraries between the CTP of F# and the Beta 2 of F#. To have access to the Split function you need to download the F# PowerPack and then reference it in your code... I download and install the PowerPack and then add the references to FSharp.Core and FSharp.PowerPack in my project. Still no luck! Some more googling, and the suggestions I got were something like this...

        #r "FSharp.PowerPack.dll";;
        #r "FSharp.PowerPack.Compatibility.dll";;

    So I add the code above to the top of my Program.fs file and still no joy... I now get an error message saying...

        Error 1: #r directives may only occur in F# script files (extensions .fsx or .fsscript). Either move this code to a script file, add a '-r' compiler option for this reference or delimit the directive with '#if INTERACTIVE'/'#endif'.

    So what does that mean? If I put the code straight into F# Interactive it works – but I want to be able to use it in a project. The C# equivalent would, I think, be the “using” keyword. The #r doesn’t seem like it should be in the F# code. So I try what the compiler suggests by doing the following...

        #if INTERACTIVE
        #r "FSharp.PowerPack.dll";;
        #r "FSharp.PowerPack.Compatibility.dll";;
        #endif

    No luck, the Split method is still not recognized. So wait a second, it mentioned something about FSharp.PowerPack.Compatibility.dll – I haven’t added this as a reference to my project, so I add it and remove the two lines of #r code. Partial success – the Split method is now recognized and not underlined, but wordSet.Count is still not working. I look at my code again and it was a case error – the original wordset was mistyped compared to wordSet. Some case correction and the compiler is no longer complaining. So the code now seems to work... listed below...

        #light
        let wordCount text =
            let words = String.split [' '] text
            let wordSet = Set.ofList words
            let nWords = words.Length
            let nDups = words.Length - wordSet.Count
            (nWords, nDups)

        let showWordCount text =
            let nWords,nDups = wordCount text
            printfn "--> %d words in text" nWords
            printfn "--> %d duplicate words" nDups

    So, to recap – if I want to use the interactive compiler, then I need the #r directives. In my mind this is the equivalent of adding the references to my project. If however I want to use the PowerPack in a project – I just need to make sure that the correct references are there. I feel like a noob once again!

    Read the article

  • How to Set Up Your Enterprise Social Organization?

    - by Richard Lefebvre
    By Mike Stiles on Dec 04, 2012

    The rush for business organizations to establish, grow, and adopt social was driven out of necessity and inevitability. The result, however, was a sudden, booming social presence creating touch points with customers, partners and influencers, but without any corporate social organization or structure in place to effectively manage it. Even today, many business leaders remain uncertain as to how to corral this social media thing so that it makes sense for their enterprise.

    Imagine their panic when they hear one of the most beneficial approaches to corporate use of social involves giving up at least some hierarchical control and empowering employees to publicly engage customers. And beyond that, they should also be empowered, regardless of their corporate status, to engage and collaborate internally, spurring “off the grid” innovation.

    An HBR blog points out that traditionally, enterprise organizations function from the top down, and employees work end-to-end, structured around business processes. But the social enterprise opens up structures that up to now have not exactly been embraced by turf-protecting executives and managers. The blog asks, “What if leaders could create a future where customers, associates and suppliers are no longer seen as objects in the system but as valued sources of innovation, ideas and energy?” What if indeed? The social enterprise activates internal resources without the usual obsession with position. It is the dawn of mass collaboration.

    That does not, however, mean this mass collaboration has to lead to uncontrolled chaos. In an extended interview with Oracle, Altimeter Group analyst Jeremiah Owyang and Oracle SVP Reggie Bradford paint a complete picture of today’s social enterprise, including the internal organizational structures Altimeter Group has seen emerge. One sign of a mature social enterprise is the establishment of a social Center of Excellence (CoE), which serves as a hub for high-level social strategy, training and education, research, measurement and accountability, and vendor selection. This CoE is led by a corporate Social Strategist, most likely from a Marketing or Corporate Communications background. Reporting to them are the Community Managers, the front lines of customer interaction and engagement; business unit liaisons that coordinate the enterprise; and social media campaign/product managers, social analysts, and developers. With content rising as the defining factor for social success, Altimeter also sees a Content Strategist position emerging.

    Across the enterprise, Altimeter has seen 5 organizational patterns. Watching the video will give you the pros and cons of each.

    Decentralized – Anyone can do anything at any time on any social channel.
    Centralized – One central group controls all social communication for the company.
    Hub and Spoke – A centralized group, but business units can operate their own social under the hub’s guidance and execution. Most enterprises are using this model.
    Dandelion – Each business unit develops its own social strategy and staff, has its own ability to deploy, and its own ability to engage under the central policies of the CoE.
    Honeycomb – Every employee can do social, but as opposed to the decentralized model, it’s coordinated and monitored on one platform.

    The average enterprise has a whopping 178 social accounts, nearly ¼ of which are usually semi-idle and need to be scrapped. The last thing any C-suite needs is to cope with fragmented technologies, solutions and platforms. It’s neither scalable nor strategic. The prepared, effective social enterprise has a technology partner that can quickly and holistically integrate emerging platforms and technologies, such that whatever internal social command structure you’ve set up can continue efficiently executing strategy without skipping a beat. @mikestiles

    Read the article

  • Bluetooth not found on BCM43228

    - by TK Kocheran
    I've got a Broadcom BCM43228 mPCIe card which came with my motherboard (ASUS ROG Maximus V Extreme; I can't seem to find a link to what the card is). It's working great for WiFi right now, but I can't detect the Bluetooth hardware onboard. In Windows, I have full Bluetooth 4.0 support.

        $ lspci
        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.4 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 5 (rev c4)
        00:1c.6 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 7 (rev c4)
        00:1c.7 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 8 (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: NVIDIA Corporation Device 1189 (rev a1)
        01:00.1 Audio device: NVIDIA Corporation Device 0e0a (rev a1)
        0d:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
        0e:00.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:01.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:04.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:05.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:06.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:07.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:08.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:09.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        10:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
        12:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
        15:00.0 Network controller: Broadcom Corporation BCM43228 802.11a/b/g/n
        17:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)

    The key line seems to be:

        15:00.0 Network controller: Broadcom Corporation BCM43228 802.11a/b/g/n

    If I try to detect the Bluetooth card, I don't see anything:

        $ hcitool dev
        Devices:

    rfkill list all: Output
    lspci: Output
    lsusb: Output

    I finally found the card with usb-devices:

        T: Bus=01 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#= 3 Spd=12 MxCh= 0
        D: Ver= 2.00 Cls=ff(vend.) Sub=01 Prot=01 MxPS=64 #Cfgs= 1
        P: Vendor=0b05 ProdID=17b5 Rev=01.12
        S: Manufacturer=Broadcom Corp
        S: Product=BCM20702A0
        S: SerialNumber=############
        C: #Ifs= 4 Cfg#= 1 Atr=e0 MxPwr=0mA
        I: If#= 0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=01 Prot=01 Driver=(none)
        I: If#= 1 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=01 Prot=01 Driver=(none)
        I: If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
        I: If#= 3 Alt= 0 #EPs= 0 Cls=fe(app. ) Sub=01 Prot=01 Driver=(none)

    I've heard that this card needs to have firmware injected into it in order to function. If that's the case, how do I do it?

    Read the article

  • SQLAuthority News – Monthly list of Puzzles and Solutions on SQLAuthority.com

    - by pinaldave
    This month has been a very interesting month for SQLAuthority.com. We had multiple and various puzzles in which everybody participated, and lots of interesting conversations which we have shared. Let us start with the latest puzzles and continue going down. A few answers are also posted on Facebook.

    SQL SERVER – Puzzle Involving NULL – Resolve – Error – Operand data type void type is invalid for sum operator
    This puzzle involves NULL and throws an error. The challenge is to resolve the error. There are multiple ways to resolve this error, and readers have contributed various methods. A few of them have even supplied the reason why this error shows up. NULLs are a very important part of the database, and if one of the columns has NULL the result can be totally different from the one expected.

    SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers
    I modified a script provided by a friend to find the greatest number between two numbers. My script had a small bug in it; however, lots of readers have suggested better scripts. Madhivanan has written a blog post on the subject over here.

    SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints
    This quiz is hosted on my friend Jacob's site. I have written many hints on how one can tune cubes. One can take part here and win exciting prizes.

    SQL SERVER – Solution – Generating Zero Without using Any Numbers in T-SQL
    Madhivanan has asked a very interesting question on his blog: How to Generate Zero without using Any Numbers in T-SQL. He has demonstrated various methods of generating zero. I asked the same question on the blog and got many interesting answers, which I have shared.

    SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once
    I have to accept that this was the most difficult puzzle. In this puzzle I asked why, even though the settings are correct, the statistics of the tables are not getting updated. This puzzle tests various concepts: 1) indexes, 2) statistics, 3) database settings, etc. There are multiple ways of solving this puzzle. It was interesting: many took interest, but only a few got it right.

    SQL SERVER – Question to You – When to use Function and When to use Stored Procedure
    This is a rather straightforward question and not the typical puzzle. The answers from readers are great; however, there is still a chance for more detailed answers.

    SQL SERVER – Selecting Domain from Email Address
    I wrote on selecting domains from email addresses. Madhivanan makes a puzzle out of a simple question and wrote a follow-up post over here, in which he shows various ways one can find email addresses from a list of domains.

    Well, this last one is not a puzzle, but an amazing Guest Post by Feodor Georgiev, who has written on the subject Job Interviewing the Right Way (and for the Right Reasons). An article which everyone should read.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Installing CUDA on Ubuntu 12.04 with NVIDIA driver 295.59

    - by johnmcd
    I have been trying to get CUDA to run on an NVIDIA GT 650M based laptop. I am running Ubuntu 12.04 with the NVIDIA 295.59 driver. Also, my laptop uses Optimus, so I have installed the driver via Bumblebee. Bumblebee is not working correctly yet; however, I believe it is possible to install CUDA independently. To install CUDA I have followed the instructions detailed here: How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics?

    However, I am still running into problems building the SDK. I made the changes specified at the above link in common.mk, but I got the following (snippet) from the build process:

        make[2]: Entering directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/fluidsGL'
        /usr/bin/ld: warning: libnvidia-tls.so.302.17, needed by /usr/lib/nvidia-current/libGL.so, not found (try using -rpath or -rpath-link)
        /usr/bin/ld: warning: libnvidia-glcore.so.302.17, needed by /usr/lib/nvidia-current/libGL.so, not found (try using -rpath or -rpath-link)
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv018tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv012glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv017glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv012tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv015tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv019tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv000glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv017tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv013tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv013glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv018glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv022tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv007tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv009tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv020tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv014glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv015glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv016tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv001glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv006tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv021tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv011tls'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv020glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv019glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv002glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv021glcore'
        /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv014tls'
        collect2: ld returned 1 exit status
        make[2]: *** [../../bin/linux/release/fluidsGL] Error 1
        make[2]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/fluidsGL'
        make[1]: *** [src/fluidsGL/Makefile.ph_build] Error 2
        make[1]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C'
        make: *** [all] Error 2

    The libraries that ld warns about are present and installed on the system:

        $ locate libnvidia-tls.so.302.17 libnvidia-glcore.so.302.17
        /usr/lib/nvidia-current/libnvidia-glcore.so.302.17
        /usr/lib/nvidia-current/libnvidia-tls.so.302.17
        /usr/lib/nvidia-current/tls/libnvidia-tls.so.302.17
        /usr/lib32/nvidia-current/libnvidia-glcore.so.302.17
        /usr/lib32/nvidia-current/libnvidia-tls.so.302.17
        /usr/lib32/nvidia-current/tls/libnvidia-tls.so.302.17

    However, /usr/lib/nvidia-current and /usr/lib32/nvidia-current are not being picked up by ldconfig. I have tried adding them by adding a file to /etc/ld.so.conf.d/, which gets past this error; however, now I am getting the following error:

        make[2]: Entering directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/deviceQueryDrv'
        cc1plus: warning: command line option ‘-Wimplicit’ is valid for C/ObjC but not for C++ [enabled by default]
        obj/x86_64/release/deviceQueryDrv.cpp.o: In function `main':
        deviceQueryDrv.cpp:(.text.startup+0x5f): undefined reference to `cuInit'
        deviceQueryDrv.cpp:(.text.startup+0x99): undefined reference to `cuDeviceGetCount'
        deviceQueryDrv.cpp:(.text.startup+0x10b): undefined reference to `cuDeviceComputeCapability'
        deviceQueryDrv.cpp:(.text.startup+0x127): undefined reference to `cuDeviceGetName'
        deviceQueryDrv.cpp:(.text.startup+0x16a): undefined reference to `cuDriverGetVersion'
        deviceQueryDrv.cpp:(.text.startup+0x1f0): undefined reference to `cuDeviceTotalMem_v2'
        deviceQueryDrv.cpp:(.text.startup+0x262): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x457): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x4bc): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x502): undefined reference to `cuDeviceGetAttribute'
        deviceQueryDrv.cpp:(.text.startup+0x533): undefined reference to `cuDeviceGetAttribute'
        obj/x86_64/release/deviceQueryDrv.cpp.o:deviceQueryDrv.cpp:(.text.startup+0x55e): more undefined references to `cuDeviceGetAttribute' follow
        collect2: ld returned 1 exit status
        make[2]: *** [../../bin/linux/release/deviceQueryDrv] Error 1
        make[2]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/deviceQueryDrv'
        make[1]: *** [src/deviceQueryDrv/Makefile.ph_build] Error 2
        make[1]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C'
        make: *** [all] Error 2

    I would appreciate any help that anyone can provide. If I can supply any further information, please let me know. Thanks.

    Read the article

  • SQL SERVER – Updating Data in A Columnstore Index

    - by pinaldave
    So far I have written two articles on columnstore indexes, and both of them got very interesting readership. In fact, just recently I got a query on my previous article on the columnstore index. Read the following two articles to get familiar with the columnstore index; they give the background to the question that was asked by a certain reader:

    SQL SERVER – Fundamentals of Columnstore Index
    SQL SERVER – How to Ignore Columnstore Index Usage in Query

    Here is the reader's question: "When I tried to update my table after creating the columnstore index, it gives me an error. What should I do?"

    When a columnstore index is created on a table, the table becomes read-only and does not allow any insert/update/delete. The basic understanding is that a columnstore index will be created on a table that is very huge and holds lots of data. If a table is small enough, there is no need to create a columnstore index; a regular index should be enough. The reason the columnstore index was needed is that the table was so big that retrieving the data was taking a really, really long time. Now, updating such a huge table is always a challenge by itself. If a columnstore index has been created on a table and the table needs to be updated, you need to know that there are various ways to update it. The easiest way is to disable the index and enable it again. Consider the following code:

        USE AdventureWorks
        GO
        -- Create New Table
        CREATE TABLE [dbo].[MySalesOrderDetail](
            [SalesOrderID] [int] NOT NULL,
            [SalesOrderDetailID] [int] NOT NULL,
            [CarrierTrackingNumber] [nvarchar](25) NULL,
            [OrderQty] [smallint] NOT NULL,
            [ProductID] [int] NOT NULL,
            [SpecialOfferID] [int] NOT NULL,
            [UnitPrice] [money] NOT NULL,
            [UnitPriceDiscount] [money] NOT NULL,
            [LineTotal] [numeric](38, 6) NOT NULL,
            [rowguid] [uniqueidentifier] NOT NULL,
            [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO
        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
            ([SalesOrderDetailID])
        GO
        -- Create Sample Data Table
        -- WARNING: This query may run up to 2-10 minutes based on your system's resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.*
        FROM Sales.SalesOrderDetail S1
        GO 100
        -- Create ColumnStore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
            ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
        GO
        -- Attempt to Update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO
        /*
        It will throw the following error:
        Msg 35330, Level 15, State 1, Line 2
        UPDATE statement failed because data cannot be updated
        in a table with a columnstore index. Consider disabling the
        columnstore index before issuing the UPDATE statement, then
        rebuilding the columnstore index after UPDATE is complete.
        */

    A similar error also shows up for INSERT/DELETE operations. Here is the workaround: disable the columnstore index, perform the update, then enable the columnstore index again:

        -- Disable the Columnstore Index
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE
        GO
        -- Attempt to Update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO
        -- Rebuild the Columnstore Index
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD
        GO

    This time it will not throw an error, and the update of the table goes through successfully. Let us clean up our tables using this code:

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    In the next post we will see how we can use partitions to update the columnstore index.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • ASP.NET Session State Revisited

    - by karan@dotnet
    Every now and then I see doubts and queries about what I believe is the most discussed topic in the .NET environment – ASP.NET sessions. So what really are they, why are they needed, and what do the browser and .NET do with them? These and some other questions I hope to answer with this post.

    Because of the stateless nature of the HTTP protocol, there is always a need for state management in a web application. There are many other ways to store data, but I feel session state is among the most powerful. ASP.NET session state is a technology that lets you store server-side, user-specific data. Our web applications can then use that data to process requests from the user for which the session state was instantiated.

    So when is a session first created? When we start an ASP.NET application, a non-expiring cookie is created, called ASP.NET_SessionId. Basically there are two methods for this, depending on how you configure the setting in your config file: the session ID can be part of the cookie as discussed above (called ASP.NET_SessionId), or it is embedded in the browser’s URL. For the latter we have to set cookieless sessions in our web.config file. These session IDs are 120-bit random numbers represented by 20-character strings. The cookie will be alive until you close your browser. If you browse from one app to another within the same domain, then both apps will use the same session ID to track the session state. Why reuse? So that you don’t have to create a new session ID for each request.

    One can abandon a particular session by calling Session.Abandon(), which stops the page processing and clears out the session data. A subsequent page request causes a brand new session object to be instantiated. So what happened to my cookie? Well, the session cookie is still there even after Session.Abandon() is called and another session object is created. Session.Abandon() lets you clear out your session state without waiting for the session timeout. By default, this timeout is a 20-minute sliding expiration. The expiration is refreshed every time the user makes a request to the web site and presents the session ID cookie. The Abandon method sets a flag in the session state object that indicates that the session state should be abandoned.

    If your app does not have a global.asax, then your session cookie will be killed at the end of each page request. So you need to have a global.asax file and a Session_Start() handler to make sure that the session cookie remains intact once it is issued after the first page hit. The runtime invokes global.asax’s Session_OnEnd() when you call Session.Abandon() or when the session times out.

    The session manager stores session data in the HttpCache with a sliding expiration, and this timeout can be configured in the <sessionState> section of the web.config file. When the timeout is up, the HttpCache will remove the session state object.

    Sometimes we want particular pages not to time out, as compared to other pages in our application. We can handle this in two ways. First, we can set a timer or maybe a JavaScript function that refreshes the page after fixed intervals of time. The only catch is that the page may be cached locally, in which case the request is not made to the server; to prevent that you can add this to your page:

        <%@ OutputCache Location="None" VaryByParam="None" %>

    The second approach is to move the page into its own folder and then add a web.config to that folder to control the timeout. Also, not all pages in your application will need access to session state. For those pages that do not, you can indicate that session state is not needed and prevent session data from being fetched from the store in requests to these pages. You can disable session state at page level like this:

        <%@ Page EnableSessionState="False" %>

    tbc…

    Read the article

  • Having fun with Reflection

    - by Nick Harrison
    I was once asked in a technical interview what I could tell them about Reflection. My response, while a little tongue in cheek, was that "I can tell you it is one of my favorite topics to talk about." I did get a laugh out of that, and it was a great ice breaker. Reflection may not be the answer for everything, but it often can be, or maybe even should be.

    I have posted in the past about my favorite CopyTo method. It can come in several forms and is often very useful. I explain it further and expand on the basic idea here. The basic idea is to let reflection loop through the properties of two objects and synchronize the ones that are in common. I love this approach for data binding and for passing data across the layers in an application.

    Recently I have been working on a project leveraging Data Transfer Objects to pass data through WCF calls. We won't go into how the architecture got this way, but in essence there is a partial duplicate inheritance hierarchy where there is a related Domain Object for each Data Transfer Object. The matching objects do not share a common ancestor or a common interface, but they will have the same properties in common. Setting aside the problems with this approach, let's talk about how Reflection and our friendly CopyTo could make the most of a bad situation without having to change too much.

    One of the problems is keeping the two sets of objects in synch. For this particular project, the DO has all of the functionality and the DTO is used simply to transfer data back and forth. Both sets of objects have parallel hierarchies, with the same properties being defined at the corresponding levels. So we end up with BaseDO, BaseDTO, GenericDO, GenericDTO, ProcessAreaDO, ProcessAreaDTO, SpecializedProcessAreaDO, SpecializedProcessAreaDTO, TableDO, TableDTO, and so on.

    Without using Reflection and a CopyTo function, tremendous care and effort must be taken to keep the corresponding objects in synch. New properties can be added at any level in the inheritance and must be kept in synch at all subsequent layers. For this project we have come up with a clever approach of calling a base GetDto or UpdateDto, making sure that the same method at each level of inheritance is called. Each level is responsible for updating the properties at that level. This is a lot of work, and not keeping it in synch can create all manner of problems, some of which are very difficult to track down. The other problem is the type of code that these methods tend to wind up with. You end up with code like this:

        Transferable dto = new Transferable();
        base.GetDto(dto);
        dto.OfficeCode = GetDtoNullSafe(officeCode);
        dto.AccessIndicator = GetDtoNullSafe(accessIndicator);
        dto.CaseStatus = GetDtoNullSafe(caseStatus);
        dto.CaseStatusReason = GetDtoNullSafe(caseStatusReason);
        dto.LevelOfService = GetDtoNullSafe(levelOfService);
        dto.ReferralComments = referralComments;
        dto.Designation = GetDtoNullSafe(designation);
        dto.IsGoodCauseClaimed = GetDtoNullSafe(isGoodCauseClaimed);
        dto.GoodCauseClaimDate = goodCauseClaimDate;

    One obvious problem is that this is tedious, error-prone code. Adding helper functions like GetDtoNullSafe helps out immensely, but there is still an easier way. We can bypass the tedious code, bypass the complex inheritance tricks, and reduce all of this to a single method in the base class:

        TransferObject dto = new TransferObject();
        CopyTo(this, dto);
        return dto;

    In the case of this one project, such a change eliminated the need for 20% of the total code base and a whole class of unit test cases that made sure all of the properties were in synch. The impact of such a change also needs to include the ongoing time savings and the improvements in quality that arise from it. Developers who are not worried about keeping the properties in synch across mirrored object hierarchies are freed to worry about more important things, like implementing business requirements.
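    The post describes CopyTo but does not list it; below is a minimal sketch of the idea, written in Java for illustration (the post's project is .NET, so treat the names and the bean-introspection API as stand-ins rather than the author's implementation). It copies every readable source property to a same-named, type-compatible writable target property:

        import java.beans.Introspector;
        import java.beans.PropertyDescriptor;
        import java.lang.reflect.Method;

        public final class PropertyCopier {
            // Copy the values of the properties that source and target have in common.
            public static void copyTo(Object source, Object target) throws Exception {
                PropertyDescriptor[] sourceProps =
                        Introspector.getBeanInfo(source.getClass()).getPropertyDescriptors();
                PropertyDescriptor[] targetProps =
                        Introspector.getBeanInfo(target.getClass()).getPropertyDescriptors();
                for (PropertyDescriptor sp : sourceProps) {
                    Method read = sp.getReadMethod();
                    if (read == null) continue;                    // source property not readable
                    for (PropertyDescriptor tp : targetProps) {
                        Method write = tp.getWriteMethod();
                        if (write == null) continue;               // target property not writable
                        if (tp.getName().equals(sp.getName())
                                && write.getParameterTypes()[0].isAssignableFrom(read.getReturnType())) {
                            write.invoke(target, read.invoke(source));
                        }
                    }
                }
            }
        }

    With something like this in the base class, GetDto collapses to the three-line version shown above, no matter how many properties each level of the hierarchy adds.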

    Read the article

  • How to draw a continuous line, just like in Paint [on hold]

    - by hussain shah
    Hi, I want to draw points. The following code works, but the problem is that when I drag the mouse slowly it draws fine, whereas if I move the cursor fast the points do not form a continuous line. What is the solution?

        #include <iostream>
        #include <GL/glut.h>
        #include <GL/glu.h>
        #include <stdlib.h>

        void first()
        {
            glPushMatrix();
            glTranslatef(1, 01, 01);
            glScalef(1, 1, 1);
            glColor3f(0, 1, 0);
            glBegin(GL_QUADS);
            glVertex2f(0.8, 0.6);
            glVertex2f(0.6, 0.6);
            glVertex2f(0.6, 0.8);
            glVertex2f(0.8, 0.8);
            glEnd();
            glPopMatrix();
            glFlush();
        }

        void display(void)
        {
            glClear(GL_COLOR_BUFFER_BIT); // store color of each pixel of a frame
            glClearColor(0, 0, 0, 0);     // screen color
            //glFlush();
        }

        // Called for each motion event; note that only the current point is
        // drawn, so widely spaced events (fast mouse movement) leave gaps.
        void drag(int x, int y)
        {
            y = 500 - y;
            //x = 500 - x;
            glPointSize(5);
            glColor3f(1.0, 1.0, 1.0);
            glBegin(GL_POINTS);
            glVertex2f(x, y + 2);
            glEnd();
            glutSwapBuffers();
            glFlush();
        }

        void reshape(int w, int h) {}

        void init(void)
        {
            glClear(GL_COLOR_BUFFER_BIT); // store color of each pixel of a frame
            glClearColor(0, 0, 0, 0);
            glViewport(0, 0, 500, 500);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0.0, 500.0, 0.0, 500.0, 1.0, -1.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

        void mouse_button(int button, int state, int x, int y)
        {
            if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
                drag(x, y);
                first();
            }
            //else if (button == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN) {
            //
            //}
            else if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN) {
                exit(0);
            }
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);                     // initialize the program
            glutInitDisplayMode(GLUT_SINGLE);          // set up a basic display buffer (only singular for now)
            glutInitWindowSize(500, 500);              // set the width and height of the window
            glutInitWindowPosition(100, 100);          // set the position of the window
            glutCreateWindow("A basic OpenGL Window"); // set the caption for the window
            glutMotionFunc(drag);
            //glutMouseFunc(mouse_button);
            init();
            glutDisplayFunc(display);                  // call the display function to draw our world
            glutMainLoop();                            // start the GLUT event loop
            return 0;
        }

    Read the article

  • Silverlight Cream for January 08, 2011 -- #1023

    - by Dave Campbell
    In this Heavy and yet incomplete Issue: Mike Wolf, Walter Ferrari, Colin Eberhardt, Mathew Charles, Don Burnett, Senthil Kumar, cherylws, Rob Miles, Derik Whittaker, Thomas Martinsen (-2-), Jason Ginchereau, Vishal Nayan, and WindowsPhoneGeek.

    Above the Fold:
    Silverlight: "Automatically Showing ToolTips on a Trimmed TextBlock (Silverlight)" – Colin Eberhardt
    WP7: "Windows Phone Blue Book Pdf" – Rob Miles
    Sharepoint/Silverlight: "Discover Sharepoint with Silverlight - Part 1" – Walter Ferrari

    Shoutouts: Dave Isbitski has announced a WP7 Firestarter, check for your local MS office: Announcing the "Light up your Silverlight Applications for Windows 7 Firestarter"

    From SilverlightCream.com:

    Leveraging Silverlight in the USA TODAY Windows 7-Based Slate App – Mike Wolf has a post up about Cynergy's release of the new USA TODAY software for Windows 7 slate devices, and gives a great rundown of all the resources and how specific Silverlight features were used... tons of outstanding external links here!

    Discover Sharepoint with Silverlight - Part 1 – Walter Ferrari has a tutorial up at SilverlightShow... looks like the first in a series on Silverlight and Sharepoint... lots of low-level info about the internals and using them.

    Automatically Showing ToolTips on a Trimmed TextBlock (Silverlight) – Colin Eberhardt has a really cool AutoTooltip attached behavior that gives a tooltip of the actual text if the text is trimmed... and has an active demo on the post... very cool.

    RIA Services Output Caching – Mathew Charles digs into a RIA feature that hasn't gotten any blog love: output caching, describing all the ins and outs of improving the performance of your app using caching.

    Emailing your Files to Box.net Cloud Storage with WP7 – Don Burnett details out everything you need to do to get Box.net and your WP7 set up to talk to each other.

    Shortcut keys for Developing on Windows Phone 7 Emulator – Senthil Kumar has some good WP7 posts up... this one is a cheat-sheet list of function-key assignments for the WP7 emulator... another sidebar listing.

    Windows Phone 7 Design Guidelines – Cheat Sheet – cherylws has a great guideline list/cheat sheet up for reference while building a WP7 app... this is a great reference... I'm adding it to the right-hand sidebar of WynApse.com.

    Windows Phone Blue Book Pdf – Rob Miles has added another book and color to his collection of both – Windows Phone Programming in C#, also known as the Windows Phone Blue Book... get a copy from the links he gives, and check out his other free books as well.

    Navigating to an external URL using the HyperlinkButton – Derik Whittaker has a post up discussing the woes (and error messages) of trying to navigate to an external URL with the Hyperlink button in WP7, plus his MVVM-friendly solution that you can download.

    Set Source on Image from code in Silverlight – Thomas Martinsen has a couple posts up... first is this quick one on the code required to set an image source.

    Show UI element based on authentication – Thomas Martinsen's latest is one on a BoolToVisibilityConverter allowing a boolean indicator of authentication to be used to control the visibility of a button (in the sample).

    WP7 ReorderListBox improvements: rearrange animations and more – Jason Ginchereau has updated his ReorderListBox from last week to add some animations (fading/sliding) during the rearrangement.

    Navigation in Silverlight Without Using Navigation Framework – Vishal Nayan has a post that attracted my attention... navigation by manipulating RootVisual content... I've been knee-deep in similar code in Prism this week (which is why my blogging is off)...

    Creating a WP7 Custom Control in 7 Steps – WindowsPhoneGeek creates a simple custom control for WP7 before your very eyes in his latest post, focusing on the minimum requirements necessary for writing a custom control.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by Elke Phelps (Oracle Development)
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases. While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager today to scramble sensitive data in cloned environments.

    Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in such a way that allows the application to continue to function. Using the Data Masking Pack in E-Business Suite environments is now easier with the release of a new set of templates for E-Business Suite databases:

    Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack (Patch 13898999)

    This template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments.

    Is there a charge for this?
    Yes. You must purchase licenses for Oracle Enterprise Manager and the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing.

    What does data masking do in E-Business Suite environments?
    Application data masking does the following:

    De-identify the data: Scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number.
    Mask sensitive data: Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information.
    Maintain data validity: Provide a fully functional application.

    How can EBS customers use data masking?
    The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.

    References
    For additional information on the Oracle E-Business Suite Template for Data Masking Pack please refer to the following:
    Masking Sensitive Data for Non-production Use in the Oracle Enterprise Manager Concepts 11g
    Using the Oracle E-Business Suite, Release 12.1.3 Template for the Data Masking Pack, Note 1437485.1

    Related Articles
    Webcast Replay Available: E-Business Suite Data Protection
    Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition. And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive! It almost makes you want to go with Oracle. That, and a desire to have Larry Ellison do things to your orifices.

    And since they've introduced Availability Groups and marked database mirroring as deprecated, you'd think they'd make mirroring available in all editions. Alas... they don't... officially, anyway. Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions!

    It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement:

        IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%'
            print 'Lame'
        else
            print 'Cool'

    If that statement returns true, it fails. (The print statements are just placeholders.) Go ahead and test it on Standard, Workgroup, and Express edition instances compared to an Enterprise or Developer edition instance (which support everything).

    Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror. Surprisingly, it's not actually implemented in SQL Server! Some of it is, but that's something of a smokescreen; the real meat of it is simple filesystem primitives.

    The NTFS filesystem supports links, both hard and symbolic, so that you can create two entries for the same file in different directories and/or under different names. You can create them using the MKLINK command in a command prompt:

        mklink /D D:\SkyDrive\Data D:\Data
        mklink /D D:\SkyDrive\Log D:\Log

    This creates a symbolic link from my data and log folders to my SkyDrive folder. Any file saved in either location will instantly appear in the other. And since my SkyDrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth, of course).

    So what does this have to do with database mirroring? Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a SkyDrive link. Although it doesn't actually use SkyDrive, it performs the same function. So in effect, the following statement:

        ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022'

    Is turned into:

        mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$"

    The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers. I couldn't get the above statement to work correctly, but found that doing mklink to a local SkyDrive folder gave me similar functionality.

    To wrap this up, all you have to do is the following:

    1. Install SkyDrive on both SQL Servers (principal and mirror) and set the local SkyDrive folder (D:\SkyDrive in these examples).
    2. On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data
    3. On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data
    4. Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log).

    Voila! Your databases are kept in sync on multiple servers!

    One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in SkyDrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either. So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline. It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping, with a lot less latency.

    I will end this with the obvious "not supported by Microsoft" and "don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • Implementing a modern web application with Web API on top of old services

    - by Gaui
    My company has many WCF services which may or may not be replaced in the near future. The old web application is written in WebForms and communicates straight with these services via SOAP, and returns DataTables. Now I am designing a new web application in a modern style: an AngularJS client which communicates with an ASP.NET Web API via JSON. The Web API then communicates with the WCF services via SOAP. In the future I want to let the Web API handle all requests and go straight to the database, but because the business logic implemented in the WCF services is complicated, it's going to take some time to rewrite and replace it.

    Now to the problem: I'm trying to make it easy to replace the WCF services in the near future with some other data storage, e.g. another endpoint, a database, or whatever. I also want to make it easy to unit test the business logic. That's why I have structured the Web API with a repository layer and a service layer. The repository layer communicates directly with the data storage (WCF service, database, or whatever), and the service layer then uses the repository (via dependency injection) to get the data. It doesn't care where the data comes from. Later on I can be in control and structure the data returned from the data storage (DataTable to POCO), and I can test the logic in the service layer with some mock repository (using dependency injection).

    Below is some code to explain where I'm going with this. But my question is: does this all make sense? Am I making this overly complicated, and could it be simplified in any way? Does this flexibility make it too complicated to maintain? My main goal is to make it as easy as possible to switch to another data storage later on, e.g. an ORM, and to be able to test the logic in the service layer. And because the majority of the business logic is implemented in these WCF services (and they return DataTables), I want to be in control of the data and the structure returned to the client. Any advice is greatly appreciated.

    Update 20/08/14: I created a repository factory, so services all share repositories. Now it's easy to mock a repository, add it to the factory, and create a provider using that factory. Any advice is much appreciated. I want to know if I'm making things more complicated than they should be. So it looks like this:

    1. Repository Factory

        public class RepositoryFactory
        {
            private Dictionary<Type, IServiceRepository> repositories;

            public RepositoryFactory()
            {
                this.repositories = new Dictionary<Type, IServiceRepository>();
            }

            public void AddRepository<T>(IServiceRepository repo) where T : class
            {
                if (this.repositories.ContainsKey(typeof(T)))
                {
                    this.repositories.Remove(typeof(T));
                }
                this.repositories.Add(typeof(T), repo);
            }

            public dynamic GetRepository<T>()
            {
                if (this.repositories.ContainsKey(typeof(T)))
                {
                    return this.repositories[typeof(T)];
                }
                throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name);
            }
        }

    I'm not very fond of dynamic, but I don't know how to retrieve that repository otherwise (a typed alternative is sketched after the code below).

    2. Repository and service

        // Service repository interface
        // All repository interfaces extend this
        public interface IServiceRepository { }

        // Invoice repository interface
        // Makes it easy to mock the repository later on
        public interface IInvoiceServiceRepository : IServiceRepository
        {
            List<Invoice> GetInvoices();
        }

        // Invoice repository
        // Connects to some data storage to retrieve invoices
        public class InvoiceServiceRepository : IInvoiceServiceRepository
        {
            public List<Invoice> GetInvoices()
            {
                // Get the invoices from somewhere
                // This could be a WCF, a database, or whatever
                using (InvoiceServiceClient proxy = new InvoiceServiceClient())
                {
                    return proxy.GetInvoices();
                }
            }
        }

        // Invoice service
        // Service that handles talking to a real or a mock repository
        public class InvoiceService
        {
            // Repository factory
            RepositoryFactory repoFactory;

            // Default constructor
            // Default connects to the real repository
            public InvoiceService(RepositoryFactory repo)
            {
                repoFactory = repo;
            }

            // Service function that gets all invoices from some repository (mock or real)
            public List<Invoice> GetInvoices()
            {
                // Query the repository
                return repoFactory.GetRepository<IInvoiceServiceRepository>().GetInvoices();
            }
        }
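    On the `dynamic` question above, the usual fix is to key the registry on a type token so the cast happens, checked, in exactly one place. Here is a minimal sketch of that pattern, written in Java for illustration (the poster's code is C#, where the equivalent is returning T and casting inside the factory; all names here are hypothetical):

        import java.util.HashMap;
        import java.util.Map;

        final class RepositoryFactory {
            private final Map<Class<?>, Object> repositories = new HashMap<>();

            // Register a repository under its interface type.
            <T> void addRepository(Class<T> key, T repository) {
                repositories.put(key, repository);
            }

            // Statically typed lookup: Class.cast confines the one
            // unchecked spot to the factory itself.
            <T> T getRepository(Class<T> key) {
                Object repo = repositories.get(key);
                if (repo == null) {
                    throw new IllegalStateException("No repository found for " + key.getName());
                }
                return key.cast(repo);
            }
        }

    Usage would then look like repoFactory.getRepository(IInvoiceServiceRepository.class).getInvoices(), with a compile-time-typed result and no dynamic dispatch involved.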

    Read the article

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, new option was introduced to support backup to tapes via SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps to perform backup and restore jobs via media management software such as Oracle's Secure Backup (OSB). There are other storage managers like IBM's Tivoli Storage Manager (TSM) and Symantec's Netbackup (NB) which are also supported by MEB but we don't guarantee that it will function as expected for every release. MEB supports SBT API version 2.0 In this blog, I am primarily going to focus the interface of MEB and Symantec's NB. If we are using tapes for backup, ensure that tape library and tape drives are compatible. Test Setup 1. Install NB 7.5 master and media servers in Linux OS. ( NB 7.1 can also be used but for testing purpose I used NB 7.5)2. Install MEB 3.8 also in Linux OS.3. Install NB admin console in your windows desktop and configure the NB master server from there. Note: Ensure that you have root user permission to install NetBackup. Configuration Steps for MEB and NB Once MEB and NB are installed, Ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 in the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from windows console. That is configure the storage units by specifying the Storage unit name, Disk type, Media Server name etc.  Create NetBackup policies that are user selectable. But please make sure that policy type is "Oracle".  Define the clients where MEB will be executed. Some times this will be different host where MEB is run or some times in same Media server where NB and tapes are attached. Now once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution.MEB should be run as single file backup using --backup-image option with prefix sbt:(it is a tag which tells MEB that it should stream the backup image through the SBT interface) which is sent to NB client via SBT interface . The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with MMS through SBT interface. Backup The following parameters should also be ready for the execution,    --sbt-lib-path : Path to SBT library specific to NetBackup MMS. SBT lib for NetBackup  is in /usr/openv/netbackup/bin/libobk.so64    --sbt-environment: Environment variables must be defined specific to NetBackup. 
In our example below, we use
   NB_ORA_SERV=myserver.com,
   NB_ORA_CLIENT=myserver.com,
   NB_ORA_POLICY=NBU-MEB,
   ORACLE_HOME=/export/home2/tmp/hema/mysql-server/

./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image

Once the backup completes successfully, it should appear in the Activity Monitor in the NetBackup console.

For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied:

./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir

Now apply logs as usual, shut down the server, perform the restore, then restart the server and check the data contents:

./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log

./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back

The NB console should show the 'Restore' job as done. If you don't see it, something is wrong with MEB or NetBackup. You can also refer to more detailed steps on MEB and NB integration in the whitepaper here.

    Read the article

  • Speeding up procedural texture generation

    - by FalconNL
Recently I've begun working on a game that takes place in a procedurally generated solar system. After a bit of a learning curve (having never worked with Scala, OpenGL ES 2 or libGDX before), I have a basic tech demo going where you spin around a single procedurally textured planet. The problem I'm running into is the performance of the texture generation.

A quick overview of what I'm doing: a planet is a cube that has been deformed to a sphere. To each side, an n x n (e.g. 256 x 256) texture is applied, and these are bundled in one 8n x n texture that is sent to the fragment shader. The last two slots are not used; they're only there to make sure the width is a power of 2. The texture is currently generated on the CPU, using the updated 2012 version of the simplex noise algorithm linked to in the paper 'Simplex noise demystified'.

The scene I'm using to test the algorithm contains two spheres: the planet and the background. Both use a greyscale texture consisting of six octaves of 3D simplex noise, so for example if we choose 128x128 as the texture size there are 128 x 128 x 6 x 2 x 6 = about 1.2 million calls to the noise function. The closest you will get to the planet is about what's shown in the screenshot, and since the game's target resolution is 1280x720 that means I'd prefer to use 512x512 textures. Combine that with the fact that the actual textures will of course be more complicated than basic noise (there will be a day and a night texture, blended in the fragment shader based on sunlight, and a specular mask; I need noise for continents, terrain color variation, clouds, city lights, etc.) and we're looking at something like 512 x 512 x 6 x 3 x 15 = 70 million noise calls for the planet alone.

In the final game there will be activities when traveling between planets, so a wait of 5 or 10 seconds, possibly 20, would be acceptable since I can calculate the texture in the background while traveling, though obviously the faster the better. Getting back to our test scene, performance on my PC isn't too terrible, though still too slow considering the final result is going to be about 60 times worse:

128x128: 0.1s
256x256: 0.4s
512x512: 1.7s

This is after I moved all performance-critical code to Java, since trying to do so in Scala was a lot worse. Running this on my phone (a Samsung Galaxy S3), however, produces a more problematic result:

128x128: 2s
256x256: 7s
512x512: 29s

Already far too long, and that's not even factoring in the fact that it'll be minutes instead of seconds in the final version. Clearly something needs to be done. Personally, I see a few potential avenues, though I'm not particularly keen on any of them yet:

1. Don't precalculate the textures, but let the fragment shader calculate everything. Probably not feasible, because at one point I had the background as a fullscreen quad with a pixel shader and I got about 1 fps on my phone.
2. Use the GPU to render the texture once, store it and use the stored texture from then on. Upside: might be faster than doing it on the CPU since the GPU is supposed to be faster at floating-point calculations. Downside: effects that cannot (easily) be expressed as functions of simplex noise (e.g. gas planet vortices, moon craters, etc.) are a lot more difficult to code in GLSL than in Scala/Java.
3. Calculate a large number of noise textures and ship them with the application. I'd like to avoid this if at all possible.
4. Lower the resolution. Buys me a 4x performance gain, which isn't really enough, plus I lose a lot of quality.
5. Find a faster noise algorithm. If anyone has one I'm all ears, but simplex is already supposed to be faster than Perlin.
6. Adopt a pixel-art style, allowing for lower resolution textures and fewer noise octaves. While I originally envisioned the game in this style, I've come to prefer the realistic approach.
7. I'm doing something wrong and the performance should already be one or two orders of magnitude better. If this is the case, please let me know.

If anyone has any suggestions, tips, workarounds, or other comments regarding this problem I'd love to hear them.
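For concreteness, here is a sketch of the six-octave loop the question describes, in Java since that is where the performance-critical code lives. SimplexNoise.noise(x, y, z) stands in for the poster's 2012 simplex implementation; the name is illustrative. Each octave doubles the frequency and halves the amplitude, which is exactly why the call count scales linearly with the octave count.

// Fractal (multi-octave) 3D noise - a sketch, assuming a SimplexNoise.noise(x,y,z)
// implementation such as the one from 'Simplex noise demystified'.
public static float fractalNoise(float x, float y, float z, int octaves) {
    float sum = 0f;
    float amplitude = 1f;
    float frequency = 1f;
    float totalAmplitude = 0f;
    for (int i = 0; i < octaves; i++) {
        sum += amplitude * SimplexNoise.noise(x * frequency, y * frequency, z * frequency);
        totalAmplitude += amplitude;
        amplitude *= 0.5f; // each octave contributes half as much...
        frequency *= 2f;   // ...at twice the spatial frequency
    }
    return sum / totalAmplitude; // normalize to roughly [-1, 1]
}

One consequence worth noting: dropping from six octaves to four cuts the noise calls by a third, and the highest octaves contribute the least visible detail, so octave count is often the cheapest quality/performance dial to turn.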

    Read the article

  • Rules and advice for logging?

    - by Nick Rosencrantz
In my organization we've put together some rules / guidelines about logging that I would like to know if you can add to or comment on. We use Java, but you may comment about logging in general - rules and advice.

Use the correct logging level:
ERROR: Something has gone very wrong and needs fixing immediately.
WARNING: The process can continue without fixing. The application should tolerate this level, but the warning should always get investigated.
INFO: Information that an important process has finished.
DEBUG: Only used during development.

Make sure that you know what you're logging, and avoid letting the logging influence the behavior of the application; the function of the logging should be to write messages in the log. Log messages should be descriptive, clear, short and concise. There is not much use for a nonsense message when troubleshooting.

Put the right properties in log4j, so that the right method and class are written out automatically. Example (dated-file appender for a web app):

log4j.rootLogger=ERROR, DATEDFILE
log4j.logger.org.springframework=INFO
log4j.logger.waffle=ERROR
log4j.logger.se.prv=INFO
log4j.logger.se.prv.common.mvc=INFO
log4j.logger.se.prv.omklassning=DEBUG
log4j.appender.DATEDFILE=biz.minaret.log4j.DatedFileAppender
log4j.appender.DATEDFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.DATEDFILE.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p [%C{1}.%M] - %m%n
log4j.appender.DATEDFILE.Prefix=omklassning.
log4j.appender.DATEDFILE.Suffix=.log
log4j.appender.DATEDFILE.Directory=//localhost/WebSphereLog/omklassning/

Log values. Please log values from the application.

Log prefix. State which part of the application the logging is written from, preferably with a prefix agreed on for the project, e.g. PANDORA_DB.

The amount of text. Be careful that there is not too much logging text; it can influence the performance of the app.

Logging format: there are several variants and methods to use with log4j, but we would like a uniform use of the following format when we log exceptions:

logger.error("PANDORA_DB2: Fel vid hämtning av frist i TP210_RAPPORTFRIST", e);

(The Swedish message reads "Error fetching the deadline in TP210_RAPPORTFRIST".) In the example above it is assumed that we have set the log4j properties so that the class and the method are written out automatically.

Always use a logger, never the following: System.out.println(), System.err.println(), e.printStackTrace(). If the web app uses our framework you can get very detailed error information from EJB by using try-catch in the handler and logging according to the model above.

In our project we use conversion patterns with which method and class names are written out automatically. Here we use two different patterns, one for the console and one for the dated-file appender:

log4j.appender.CONSOLE.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
log4j.appender.DATEDFILE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

In both examples above, method and class will be written out; in the console, the row number will also be written out.

toString(): please have a toString() for every object.
Example:

@Override
public String toString() {
    StringBuilder sb = new StringBuilder();
    sb.append(" DwfInformation [ ");
    sb.append("cc: ").append(cc);
    sb.append("pn: ").append(pn);
    sb.append("kc: ").append(kc);
    sb.append("numberOfPages: ").append(numberOfPages);
    sb.append("publicationDate: ").append(publicationDate);
    sb.append("version: ").append(version);
    sb.append(" ]");
    return sb.toString();
}

instead of a special method which makes these outputs:

public void printAll() {
    logger.info("inbet: " + getInbetInput());
    logger.info("betdat: " + betdat);
    logger.info("betid: " + betid);
    logger.info("send: " + send);
    logger.info("appr: " + appr);
    logger.info("rereg: " + rereg);
    logger.info("NY: " + ny);
    logger.info("CNT: " + cnt);
}

So is there anything you can add, comment on or find questionable about these ways of using logging? Feel free to answer or comment even if it is not related to Java; Java and log4j are just one implementation of how this is reasoned.
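One habit worth adding to the list - a sketch in log4j 1.x matching the configuration above; the class and method names are illustrative, not from the original guidelines. Guarding expensive DEBUG messages keeps string construction from hurting production performance, which directly supports the "amount of text" and "behavior" rules:

import org.apache.log4j.Logger;

public class OmklassningHandler {
    private static final Logger logger = Logger.getLogger(OmklassningHandler.class);

    public void handle(DwfInformation dwf) {
        try {
            // ... business logic ...
            // INFO: an important process has finished
            logger.info("PANDORA_DB: omklassning finished");
            // Guard: dwf.toString() is only built when DEBUG is enabled,
            // so verbose logging cannot slow the application in production
            if (logger.isDebugEnabled()) {
                logger.debug("PANDORA_DB: state " + dwf);
            }
        } catch (Exception e) {
            // ERROR: something has gone very wrong and needs fixing immediately
            logger.error("PANDORA_DB: omklassning failed", e);
        }
    }
}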

    Read the article

  • Integrating with Fusion Applications using SOAP web services and REST APIs (Part 1 of 2) by Arvind Srinivasamoorthy

    - by JuergenKress
Fusion Applications provides several types of interfaces to facilitate integration with other applications within the enterprise and on the cloud. As one of the key integration interfaces, Fusion Applications (FA) supports SOAP-services-based integration, both inbound and outbound. At this point FA doesn't provide REST APIs, but they are planned for a future release. It is, however, possible to invoke external REST APIs from FA, which we will discuss. Oracle continues to invest in improving both SOAP- and REST-based connectivity. The content in this blog is based on features that were available at the time of writing it.

In this two-part blog, I will cover the following topics briefly:
1. Invoking FA SOAP web services from external applications
   a. Identifying the FA SOAP web service to be invoked
   b. Sample invocation from an external application
   c. Techniques to invoke FA services from an ADF application
2. Invoking external SOAP web services from FA (covered in Part 2)
3. Invoking external REST APIs from FA (covered in Part 2)

I'll touch upon some basics, so that you can quickly build a few SOAP/REST interactions with FA. If you do not already have access to an FA instance (on-premise or SaaS), you can request a free 30-day trial of the Oracle Sales Cloud using http://cloud.oracle.com

1. Invoking FA SOAP web services from external applications

There are two main types of services that FA exposes:

- ADF Services: These services allow you to perform CRUD operations on Fusion business objects, for example Sales Party Service, Opportunity Service, etc. Using these services you can typically perform operations such as get, find, create, delete, update, etc. on FA objects. These services are typically useful for UI-driven integrations, such as looking up FA information from external application UIs or using third-party interfaces to create/update data in FA. They are also used in non-UI-driven integration use cases, such as the initial upload of business or setup data, synchronizing data with external systems, etc.

- Composite Services: These services involve more logic than CRUD, often involving human workflows, rules, etc. These services perform a business function, such as the Get Orchestration Order Service, and are used when building larger process-based integrations with external systems. They are usually asynchronous in nature and are not typically used for UI integration patterns.

1a. Identifying the FA SOAP web service to be invoked

All FA web service metadata is available through an OER instance (Oracle Enterprise Repository), which is publicly available via http://fusionappsoer.oracle.com. This is the starting point for you to discover the services that you are going to work with. You do not need to own an FA account to browse the services using the above UI. You can use the search area on the left to narrow down your search to what you are looking for: for example, you can choose the type (ADF Services or Composite), or narrow your search to a specific FA version, product family, etc. Read the complete article here.

SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: AppAdvantage,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress,Arvind Srinivasamoorthy
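To make the "sample invocation" step concrete before reading the full article, here is a bare-bones sketch of calling an FA SOAP service from an external Java client using plain JAX-WS Dispatch. The WSDL URL, namespaces and port name below are placeholders - take the real values from the service's OER entry - and a production call would also need WS-Security username-token headers, which are omitted here for brevity.

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;

public class FaSoapClient {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and QNames - substitute values from OER
        URL wsdl = new URL("https://your-fa-host/crmCommonSalesParties/SalesPartyService?WSDL");
        QName serviceName = new QName("http://example.com/ns-placeholder/", "SalesPartyService");
        QName portName = new QName("http://example.com/ns-placeholder/", "SalesPartyServiceSoapHttpPort");

        Service service = Service.create(wsdl, serviceName);
        Dispatch<SOAPMessage> dispatch =
                service.createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE);

        SOAPMessage request = MessageFactory.newInstance().createMessage();
        // Populate the SOAP body here with the operation payload (e.g. a find
        // or get request) exactly as described by the WSDL, then invoke:
        SOAPMessage response = dispatch.invoke(request);
        response.writeTo(System.out);
    }
}

Generating a typed proxy with wsimport from the service WSDL is an equally valid route; Dispatch is shown only because it keeps the sketch free of generated classes.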

    Read the article

  • Problem with script substitution when running script

    - by tucaz
Hi! I'm new to Linux so this probably should be an easy fix, but I cannot see it. I have a script downloaded from official sources that is used to install additional tools for F#, but it gives me a syntax error when running it. I tried to replace ( and ) by { and }, but eventually it led me to another error, so I think this is not the problem, since the script works for everybody. I read some articles that say that my bash version may not be the right one. I'm using Ubuntu 10.10 and here is the error:

install-bonus.sh: 28: Syntax error: "(" unexpected (expecting "}")

And these are lines 27, 28 and 29:

{
    declare -a DIRS=("${!3}")
    FILE=$2

And the full script (note: the HTML rendering had stripped the shell redirection characters; they are restored below):

#! /bin/sh -e

PREFIX=/usr
BIN=$PREFIX/bin
MAN=$PREFIX/share/man/man1/

die() {
    echo "$1" >&2
    echo "Installation aborted." >&2
    exit 1
}

echo "This script will install additional material for F# including"
echo "man pages, fsharpc and fsharpi scripts and Gtk# support for F#"
echo "Interactive (root access needed)"
echo ""

# Utility function that searches specified directories for a specified file
# and if the file is not found, it asks user to provide a directory
RESULT=""
searchpaths() {
    declare -a DIRS=("${!3}")
    FILE=$2
    DIR=${DIRS[0]}
    for TRYDIR in ${DIRS[@]}
    do
        if [ -f $TRYDIR/$FILE ]
        then
            DIR=$TRYDIR
        fi
    done
    while [ ! -f $DIR/$FILE ]
    do
        echo "File '$FILE' was not found in any of ${DIRS[@]}. Please enter $1 installation directory:"
        read DIR
    done
    RESULT=$DIR
}

# Locate F# installation directory - this is needed, because we want to
# add environment variable with it, generate 'fsharpc' and 'fsharpi' and also
# copy load-gtk.fsx to that directory
PATHS=( $1 /usr/lib/fsharp /usr/lib/shared/fsharp )
searchpaths "F# installation" FSharp.Core.dll PATHS[@]
FSHARPDIR=$RESULT
echo "Successfully found F# installation directory."

# Check that we have everything we need
[ $(id -u) -eq 0 ] || die "Please run the script as root."
which mono > /dev/null || die "mono not found in PATH."

# Make sure that all additional assemblies are in GAC
echo "Installing additional F# assemblies to the GAC"
gacutil -i $FSHARPDIR/FSharp.Build.dll
gacutil -i $FSHARPDIR/FSharp.Compiler.dll
gacutil -i $FSHARPDIR/FSharp.Compiler.Interactive.Settings.dll
gacutil -i $FSHARPDIR/FSharp.Compiler.Server.Shared.dll

# Install man pages
echo "Installing additional F# commands, scripts and man pages"
mkdir -p $MAN
cp *.1 $MAN

# Export the FSHARP_COMPILER_BIN environment variable
if [[ ! "$OSTYPE" =~ "darwin" ]]; then
    echo "export FSHARP_COMPILER_BIN=$FSHARPDIR" > fsharp.sh
    mv fsharp.sh /etc/profile.d/
fi

# Generate 'load-gtk.fsx' script for F# Interactive (ask user if we cannot find binaries)
PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gtk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
searchpaths "Gtk#" gtk-sharp.dll PATHS[@]
GTKDIR=$RESULT
echo "Successfully found Gtk# root directory."

PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/glib-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
searchpaths "Glib" glib-sharp.dll PATHS[@]
GLIBDIR=$RESULT
echo "Successfully found Glib# root directory."

PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/atk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
searchpaths "Atk#" atk-sharp.dll PATHS[@]
ATKDIR=$RESULT
echo "Successfully found Atk# root directory."

PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gdk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 )
searchpaths "Gdk#" gdk-sharp.dll PATHS[@]
GDKDIR=$RESULT
echo "Successfully found Gdk# root directory."

cp bonus/load-gtk.fsx load-gtk1.fsx
sed "s,INSERTGTKPATH,$GTKDIR,g" load-gtk1.fsx > load-gtk2.fsx
sed "s,INSERTGDKPATH,$GDKDIR,g" load-gtk2.fsx > load-gtk3.fsx
sed "s,INSERTATKPATH,$ATKDIR,g" load-gtk3.fsx > load-gtk4.fsx
sed "s,INSERTGLIBPATH,$GLIBDIR,g" load-gtk4.fsx > load-gtk.fsx
rm load-gtk1.fsx
rm load-gtk2.fsx
rm load-gtk3.fsx
rm load-gtk4.fsx
mv load-gtk.fsx $FSHARPDIR/load-gtk.fsx

# Generate 'fsharpc' and 'fsharpi' scripts (using the F# path)
# 'fsharpi' automatically searches F# root directory (e.g. load-gtk)
echo "#!/bin/sh" > fsharpc
echo "exec mono $FSHARPDIR/fsc.exe --resident \"\$@\"" >> fsharpc
chmod 755 fsharpc
echo "#!/bin/sh" > fsharpi
echo "exec mono $FSHARPDIR/fsi.exe -I:\"$FSHARPDIR\" \"\$@\"" >> fsharpi
chmod 755 fsharpi
mv fsharpc $BIN/fsharpc
mv fsharpi $BIN/fsharpi

Thanks a lot!

    Read the article

  • Finding the maximum value/date across columns

    - by AtulThakor
While working on some code recently I discovered a neat little trick to find the maximum value across several columns. The starting point was finding the maximum date across several related tables and storing the maximum value against an aggregated record. Here's the sample setup code:

USE TEMPDB

IF OBJECT_ID('CUSTOMER') IS NOT NULL
BEGIN
    DROP TABLE CUSTOMER
END

IF OBJECT_ID('ADDRESS') IS NOT NULL
BEGIN
    DROP TABLE ADDRESS
END

IF OBJECT_ID('ORDERS') IS NOT NULL
BEGIN
    DROP TABLE ORDERS
END

SELECT 1 AS CUSTOMERID, 'FREDDY KRUEGER' AS NAME, GETDATE() - 10 AS DATEUPDATED
INTO CUSTOMER

SELECT 100000 AS ADDRESSID, 1 AS CUSTOMERID, '1428 ELM STREET' AS ADDRESS, GETDATE() - 5 AS DATEUPDATED
INTO ADDRESS

SELECT 123456 AS ORDERID, 1 AS CUSTOMERID, GETDATE() + 1 AS DATEUPDATED
INTO ORDERS

The original code used a function to determine the maximum date, which performed poorly. After considering pivoting the data I opted for a CASE statement; this seemed reasonable until I discovered other areas which needed to determine the maximum date across 5 or more tables, which didn't scale well. The final solution involved using the VALUES clause within a subquery, as follows:

SELECT
    C.CUSTOMERID,
    A.ADDRESSID,
    (SELECT MAX(DT)
     FROM (VALUES (C.DATEUPDATED), (A.DATEUPDATED), (O.DATEUPDATED)) AS VALUE(DT))
FROM CUSTOMER C
INNER JOIN ADDRESS A ON C.CUSTOMERID = A.CUSTOMERID
INNER JOIN ORDERS O ON O.CUSTOMERID = C.CUSTOMERID

As you can see, the solution scales well and can take advantage of many of the aggregate functions!
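A small variant of the same trick (a sketch, using the sample tables above): hoisting the VALUES subquery into CROSS APPLY gives the maximum a name, so it can be reused in WHERE or ORDER BY without repeating the expression.

SELECT C.CUSTOMERID, A.ADDRESSID, M.MAXDATE
FROM CUSTOMER C
INNER JOIN ADDRESS A ON C.CUSTOMERID = A.CUSTOMERID
INNER JOIN ORDERS O ON O.CUSTOMERID = C.CUSTOMERID
CROSS APPLY (
    SELECT MAX(DT) AS MAXDATE
    FROM (VALUES (C.DATEUPDATED), (A.DATEUPDATED), (O.DATEUPDATED)) AS V(DT)
) M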

    Read the article

  • E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c

    - by Elke Phelps (Oracle Development)
Following up on our prior announcement for EM 11g, we're pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12c. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Cloud Control 12c to scramble sensitive data in cloned E-Business Suite environments. Due to data dependencies, scrambling E-Business Suite data is not a trivial task: the data needs to be scrubbed in such a way that the application can continue to function. You may scramble data in E-Business Suite cloned environments with EM12c using the following template:

E-Business Suite 12.1.3 Data Masking Template for Data Masking Pack with EM12c (Patch 14407414)

What does data masking do in E-Business Suite environments?
Application data masking does the following:
- De-identify the data: Scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number.
- Mask sensitive data: Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information.
- Maintain data validity: Provide a fully functional application.

How can EBS customers use data masking?
The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. The template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments.

The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Data Masking Pack. When applied, the template will create an irreversibly scrambled version of your production database for development and testing.

What's new with EM 12c?
Some of the execution steps may also be performed with the EM Command Line Interface (EM CLI). Support for EM CLI is a new feature of the E-Business Suite Release 12.1.3 template for EM 12c.

Is there a charge for this?
Yes. You must purchase licenses for the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing.

References
Additional details and requirements are provided in the following My Oracle Support Note:
- Using Oracle E-Business Suite Release 12.1.3 Template for the Data Masking Pack with Oracle Enterprise Manager 12.1.0.2 Data Masking Tool (Note 1481916.1)
- Masking Sensitive Data in the Oracle Database Real Application Testing User's Guide 11g Release 2 (11.2)

Related Articles
- Scrambling Sensitive Data in E-Business Suite

    Read the article

  • Customize SharePoint list using InfoPath2010 form Part4

    - by ybbest
Customize SharePoint list using InfoPath2010 form Part1
Customize SharePoint list using InfoPath2010 form Part2
Customize SharePoint list using InfoPath2010 form Part3

In this post, I'd like to show you how to create print functionality in InfoPath for a SharePoint list. Print functionality is provided out of the box in an InfoPath form library; however, it is not available in a SharePoint list. Here are the steps to create the print functionality. You can download the new form here.

1. Create a print page in the list by copying displayifs.aspx and renaming the copy to Printifs.aspx.

2. Open the page in SharePoint Designer and copy the following JavaScript into the PlaceHolderTitleAreaClass ContentPlaceHolder:

<script type="text/javascript">
$(document).ready(function(){
    $("[id^='Ribbon']").hide();
    $(".s4-title").hide();
    $("[id='s4-leftpanel']").hide();
    $("[id='s4-ribbonrow']").hide();
    $("[id='s4-titlerow']").hide();
    $("[id='s4-titlerow']").css("height", "0px");
    $("body").css("background-color", "white");
    $("body").css("zoom", "135%");
    $("[id='MSO_ContentTable']").css("margin-left", "0px");
    $("[id='MT-BodyContent']").css("width", "900px");
    $(".MT-BodyArea").css("width", "900px");
    $("[id='MT-Layout']").css("width", "900px");
    $(".ms-bodyareacell").css("width", "900px");
    $(".s4-wpTopTable").css("border", "none");
    $("[id$='XmlFormView']").css("margin-left", "-80px");
    $("body").css("margin-top", "-30px");
    $(":contains('CAPEX')").css("border", "5px solid #FFCC00");
    window.print();
});
</script>

3. Open the InfoPath form for the list and create a field called PrintLink.

4. Set the default value of PrintLink so that it points to the print page created in step 1, passing the item's id in the query string. You can download the formula for the default value here.

5. Add a new image that looks like a Print button on the display view, so that its URL can be set to the PrintLink field. (The reason I did not use a button is that you cannot set the navigate URL for a button.)

6. Set the URL of the image to the PrintLink field.

7. Next, create the print view.

8. Copy the contents from the display view to the print view.

9. Finally, go to Printifs.aspx and edit the InfoPath web part to set the view to PrintView.

10. Republish your form and you will see the form as shown below.

11. If you click the Print button, you will see the print page and the print dialog; you can also add a company logo to the print page using CSS.

12. To deploy the customization, you can use the backup-and-restore content database approach; you can get more details from my previous blog post here.

    Read the article

top Tweets SOA Partner Community – June 2012

    - by JuergenKress
Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

Simone Geib: Contact me directly for ideas how to improve http://bit.ly/advancedsoasuite and additional posts, presentations, white papers, #soasuite
SOA Community: SOA Community Newsletter May 2012 https://soacommunity.wordpress.com/2012/05/28/soa-community-newsletter-may-2012/ #soacommunity
Simone Geib: #soasuite advanced OTN page has become too cluttered. Broke it into separate pages to start with. http://bit.ly/advancedsoasuite
SOA Community: SOA Management with Enterprise Manager Cloud Control 12c and Business Transaction Management 12c Demo https://soacommunity.wordpress.com/2012/05/21/soa-management-with-enterprise-manager-cloud-control-12c-and-business-transaction-management-12c-demo/ #soacommunity
OracleBlogs: June Webcast: SOA Gateway Implementation and Troubleshooting (2 sessions) http://ow.ly/1kbRFA
OTNArchBeat: Every cloud needs an SOA lining: analyst | @JoeMcKendrick http://zd.net/KTgMHk
ServiceTechSymposium: New session just posted to calendar: "NoSQL for Data Services, Data Virtualization & Big Data" by Guido Schmutz, Trivadis AG http://ow.ly/bjjOe
Debra Lilley: looks good - real proof people are using the apps! RT @fteter: Very cool Fusion Applications Help site: http://bit.ly/L3nvOR #FusionApps
OTNArchBeat: How to Set JVM Parameters in Oracle SOA 11G | Francis Ip http://bit.ly/JBDYPj
demed: "rapid proliferation of cloud computing will drive convergence of SOA and cloud paradigms" http://ovum.com/2012/05/18/soa-paves-the-way-for-cloud/
SOA Community: Sending out invitations to our advanced Fusion Middleware Summer Camps! Want to learn more register for the community http://www.oracle.com/goto/emea/soa
SOA Community: Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! https://soacommunity.wordpress.com/2012/05/31/middleware-oracle-excellence-awards-2012-happy-new-year/ #soacommunity #opn #opnaward #specialization #oracle
Simone Geib: #oraclesoa performance tuning resources. All in one: docs, blogs, WPs, ppts: http://bit.ly/soa_resources
OracleBlogs: Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR! http://ow.ly/1k9ri0
ServiceTechSymposium: New session just posted to Symposium calendar: "Service Modeling & BPM Business Value Patterns" by Jürgen Kress, Oracle http://www.servicetechsymposium.com/agenda2012.php #service_modeling_and_bpm_business_value_patterns
SOA Community: Happy New Year #soacommunity thanks for the business! Time for a drink ;-) http://pic.twitter.com/zkK08KWB
Jan van Zoggel: Using execute-sql() function for Name-Value pair lookups in Oracle Service Bus http://wp.me/p1H430-jZ
SOA Community: Middleware Oracle Excellence Awards 2012 – HAPPY NEW YEAR! http://wp.me/p10C8u-q4
orclateamsoa: A-Team Blog #ateam: BPM 11g Deployment & Instance Migration - I have seen a number of requests lately asking how to http://ow.ly/1jZ0h8
OTNArchBeat: Who should 'own' the Enterprise Architecture? | Michael Glas http://bit.ly/K0ge0Q
Oracle UPK & Tutor: TOMORROW! (June 23rd) - UPK Professional Webinar at Noon ET: Discover why user adoption is a key factor for the http://bit.ly/LjZjdx
Sabine Leitner: Finance event at a design hotel with barbecue: June 21 in Frankfurt (FRA) with customers SV Informatik, Schufa, LBBW http://bit.ly/JtwE3v #Oracle @itevent
OracleEnterpriseMgr: SOA Management with Enterprise Manager Cloud Control 12c and Business Transaction Management 12c Demo http://ow.ly/b3WP1 #em12c
ServiceTechSymposium: New session just posted to Symposium calendar: "Elastic SOA in the Cloud" by Steve Millidge, C2B2 Consulting http://www.servicetechsymposium.com/agenda2012.php #elastic_soa_in_the_cloud
OTNArchBeat: Securing Heterogeneous Systems Using Oracle Web Services Manager by @rluttikhuizen & Jens Peters http://bit.ly/KjShFi
orclateamsoa: A-Team Blog #ateam: How to Set JVM Parameters in Oracle SOA 11G http://ow.ly/1k2cnl
SOA Community: Oracle Service Registry in an automated (Maven) SOA/BPM build http://redstack.wordpress.com/2012/05/22/using-oracle-service-registry-in-an-automated-maven-soabpm-build/ #soacommunity #redstack #soa #osr #opn
SOA Community: High demand for advanced Fusion Middleware Summer Camps! Want to learn more register for the #soacommunity http://www.oracle.com/goto/emea/soa
OracleBlogs: How to Set JVM Parameters in Oracle SOA 11G http://ow.ly/1k1UTv
SOA Community: top Tweets SOA Partner Community – May 2012 http://wp.me/p10C8u-pP
ServiceTechSymposium: New session just posted to Symposium calendar: "SOA Governance at EDP: A Global Energy Company" by Manuel Rosa, Link http://www.servicetechsymposium.com/agenda2012.php #soa_governance_at_edp

For regular information on Oracle SOA Suite become a member of the SOA Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity,twitter,Oracle,SOA Community,Jürgen Kress,OPN,SOA,BPM

    Read the article

  • Partner Blog: Hub City Media Introduces iPad Application for Oracle Identity Analytics

    - by Tanu Sood
About the Writer: Steve Giovannetti is CTO of Hub City Media, Inc., a company that specializes in implementation and product development on the Oracle Identity Management platform. Recently, Hub City Media announced the introduction of the iPad application IdentityCert for Oracle Identity Analytics. This post explores the business use cases and application of IdentityCert.

Hub City Media (HCM) has been deploying certification solutions based on Oracle Identity Analytics since it first appeared on the market as Vaau RBACx. With each deployment we've seen the same pattern repeat time and time again:

1. Customers suffering under the weight of manual access certification regimens deploy Oracle Identity Analytics (OIA) for automated certification.
2. OIA improves the frequency, speed, accuracy, and participation of certifications across the organization.
3. Then the certifiers, typically managers and supervisors, ask, "Is there any easier way to do these certifications offline?"

The current version of OIA has a way to export certification data to a spreadsheet. For some customers, we've leveraged this feature and combined it with some of our own custom code to provide a solution based on spreadsheet exports and imports. Customers export the certification to Microsoft Excel, complete it, and then import the spreadsheet to OIA. It worked well for offline certification, but if the user accidentally altered the format of the spreadsheet, the import of the data could fail. We were close to a solution, but it wasn't reliable.

Over the past few years, we've seen the proliferation of Apple iOS devices, specifically the iPhone and iPad, in the enterprise. As our customers were asking for offline certification, we noticed that the same population of users traditionally responsible for access certification were early adopters of the iPad. The environment seemed ideal for us to create an iPad application to support offline certifications using Oracle Identity Analytics. That's why we created IdentityCert™.

IdentityCert allows users to view their analytics dashboard, complete user certifications, and resolve policy violations with OIA, from their iPads. The current IdentityCert analytics dashboard displays the same charts that are available in the Oracle Identity Analytics product; however, we plan to expand the number of available analytics in future releases.

The main function of IdentityCert is user certification, which can be performed quickly and efficiently using a simple touch interface. Managers tap into a certification and use simple gestures to claim users and certify their access. Certifications can be securely downloaded to IdentityCert and completed with or without a network connection; the user can upload the completed certifications once they are connected to a cellular or wi-fi network.

Oracle Identity Analytics can generate policy violation notifications based on detective scans of the identity warehouse or via preventative analysis of identity access requests. IdentityCert allows users to view all policy violations and resolve or delegate them to appropriate users. IdentityCert also analyzes the policy violation expression and produces more human-friendly descriptions of the violation, which improves the ability of users to resolve it. IdentityCert can be deployed quickly into a customer's environment.
It is deployed with Hub City Media's ID Services to connect Oracle Identity Analytics securely with the iPad application.

Oracle Identity Management 11g R2 is an important evolutionary release: Oracle's Identity Management suite now has more characteristics of a cohesive platform. This platform provides an integrated set of identity services that can be used to protect, manage, and audit security within the enterprise. At HCM we take the platform concept a step further and see it as an opportunity to create unique solutions for Oracle Identity Management customers. IdentityCert is our commitment to this platform.

You can download IdentityCert from the Apple iOS App Store today. It includes a demo dataset that you can use to explore the functions of the product without any server infrastructure. Download it. Give it a try. We would appreciate your interest and welcome any feedback.

Resources:
Press Release: Hub City Media Introduces iPad Application IdentityCert™ for Oracle Identity Analytics
App Store Download: http://bit.ly/IdentityCert
Oracle Identity Governance Suite

    Read the article

  • How to raycast select a scaled OBB?

    - by user3254944
I have OBB picking code, inspired by Real-Time Rendering 3 and opengl-tutorial.org, to select an OBB. I can successfully select objects that have been moved or rotated. However, I can't correctly select an object that has been scaled. The bounding box scales right, but I can only select the object in a thin strip at its center. How do I fix the checkForHits() function so that it accounts for the scaling I passed to it in the raycast matrix?

void GLWidget::selectObjRaycast()
{
    glm::vec2 mouse = (glm::vec2(mousePos.x(), mousePos.y()) / glm::vec2(this->width(), this->height())) * 2.0f - 1.0f;
    mouse.y *= -1;

    glm::mat4 toWorld = glm::inverse(ProjectionM * ViewM);
    glm::vec4 from = toWorld * glm::vec4(mouse, -1.0f, 1.0f);
    glm::vec4 to = toWorld * glm::vec4(mouse, 1.0f, 1.0f);
    from /= from.w;
    to /= to.w;

    fromAABB = glm::vec3(from);
    toAABB = glm::normalize(glm::vec3(to - from));
    checkForHits();
}

void GLWidget::checkForHits()
{
    for (int i = 0; i < myWin.myEtc->allObj.size(); ++i) // check for hits on each obj's bb
    {
        bool miss = 0;
        float tMin = 0.0f;
        float tMax = 100000.0f;

        glm::vec3 bbPos(myWin.myEtc->allObj[i]->raycastM[3].x, myWin.myEtc->allObj[i]->raycastM[3].y, myWin.myEtc->allObj[i]->raycastM[3].z);
        glm::vec3 delta = bbPos - fromAABB;

        for (int j = 0; j < 3; ++j)
        {
            glm::vec3 axis(myWin.myEtc->allObj[i]->raycastM[j].x, myWin.myEtc->allObj[i]->raycastM[j].y, myWin.myEtc->allObj[i]->raycastM[j].z);
            float e = glm::dot(axis, delta);
            float f = glm::dot(toAABB, axis);

            if (fabs(f) > 0.001f)
            {
                float t1 = (e + myWin.myEtc->allObj[i]->bbMin[j]) / f;
                float t2 = (e + myWin.myEtc->allObj[i]->bbMax[j]) / f;

                if (t1 > t2) { float w = t1; t1 = t2; t2 = w; }
                if (t2 < tMax) tMax = t2;
                if (t1 > tMin) tMin = t1;
                if (tMax < tMin) miss = 1;
            }
            else
            {
                if (-e + myWin.myEtc->allObj[i]->bbMin[j] > 0.0f || -e + myWin.myEtc->allObj[i]->bbMax[j] < 0.0f)
                    miss = 1;
            }
        }

        if (miss == 0)
        {
            intersection_distance = tMin;
            myWin.myEtc->sel.push_back(myWin.myEtc->allObj[i]);
            myWin.myEtc->allObj[i]->highlight = myWin.myGLHelp->highlight;
            break;
        }
    }
}

void Object::render(glm::mat4 PV)
{
    scaleM = glm::scale(glm::mat4(), s->val_3);
    r_quat = glm::quat(glm::radians(r->val_3));
    rotationM = glm::toMat4(r_quat);
    translationM = glm::translate(glm::mat4(), t->val_3);
    transLocal1M = glm::translate(glm::mat4(), -rsPivot->val_3);
    transLocal2M = glm::translate(glm::mat4(), rsPivot->val_3);
    raycastM = translationM * transLocal2M * rotationM * scaleM * transLocal1M;
    // MVP = PV * translationM * transLocal2M * rotationM * scaleM * transLocal1M;
}
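A sketch of one possible fix, assuming bbMin/bbMax are stored in the object's unscaled local space (as they appear to be here): when raycastM contains scale, its columns are no longer unit length, so the slab tests compare a scaled projection against unscaled extents. Normalizing each axis and folding the axis length (the per-axis scale factor) into the extents restores consistent units:

// Inside the j loop of checkForHits(), replacing the axis/e/f setup:
glm::vec3 axis(myWin.myEtc->allObj[i]->raycastM[j]);
float axisLen = glm::length(axis);   // scale factor along this local axis
axis /= axisLen;                     // unit-length slab normal
float e = glm::dot(axis, delta);
float f = glm::dot(toAABB, axis);
float bbMinS = myWin.myEtc->allObj[i]->bbMin[j] * axisLen;  // scaled extents
float bbMaxS = myWin.myEtc->allObj[i]->bbMax[j] * axisLen;
// ...then use bbMinS/bbMaxS wherever bbMin[j]/bbMax[j] appeared, both in
// the t1 = (e + bbMinS) / f slab tests and in the parallel-ray rejection.

The reasoning: a point p on the ray lies inside slab j when bbMin[j] <= dot(axis_hat, p - bbPos) / axisLen <= bbMax[j]; multiplying through by axisLen gives exactly the scaled-extent comparison above. This is untested against the poster's project, so treat it as a starting point rather than a drop-in patch.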

    Read the article
