Search Results

Search found 11735 results on 470 pages for 'global variables'.


  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
    Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real-life project, a couple of things have dawned on me. First, as soon as your SSIS Catalog accumulates a significant amount of data, the performance of the reports degrades rapidly; this is not helped by the limitations on the SQL statements that I can embed within an SSRS report. Second, SSIS professionals are data guys at heart, and those types of people feel more comfortable in a query environment than going through the rigmarole of standing up a reporting server (well, I know I do anyway). Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic’s sp_whoisactive and Brent Ozar’s sp_blitz, I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog post explaining exactly what sp_ssiscatalog does and how to use it, but if you would rather just download the bits yourself and start to play, you can download v1.0.0.0 from DB v1.0.0.0.

    Usage Scenarios

    Most Recent Execution. I find that the information one most frequently needs from the SSIS Catalog pertains to the most recent execution. Hence, if you execute sp_ssiscatalog with no parameters, that is exactly what you will get:

        EXEC [dbo].[sp_ssiscatalog]

    This will return up to 5 resultsets:

        EXECUTION - summary information about the execution, including status, start time & end time
        EVENTS - all events that occurred during the execution
        OnError,OnTaskFailed - all events where event_name is either OnError or OnTaskFailed
        OnWarning - all events where event_name is OnWarning
        EXECUTABLE_STATS - duration and execution result of every executable in the execution

    Each resultset is only displayed if there is data satisfying it; in other words, if there are no (for example) OnWarning events then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of which are of type BIT): @exec_execution, @exec_events, @exec_errors, @exec_warnings, @exec_executable_stats.

    Any Execution. As just explained, the default behaviour is to supply data for the most recent execution. If you wish to specify which execution to return data for, simply supply the execution_id as a parameter:

        EXEC [dbo].[sp_ssiscatalog] 6

    All Executions. sp_ssiscatalog can also return information about all executions, with the most recent at the top:

        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs'
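    Incidentally, the five BIT toggles can be combined in a single call. A hypothetical invocation (the parameter defaults are my assumption, not documented above) that asks only for the summary and any errors of the most recent execution might look like this:

        EXEC [dbo].[sp_ssiscatalog]
            @exec_events = 0,
            @exec_warnings = 0,
            @exec_executable_stats = 0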
    sp_ssiscatalog provides a number of parameters that enable you to filter the resultset: @execs_folder_name, @execs_project_name, @execs_package_name, @execs_executed_as_name and @execs_status_desc. Some typical usages might be:

        -- Return all failed executions
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs', @execs_status_desc='failed'

        -- Return all executions for a specified folder
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs', @execs_folder_name='My folder'

        -- Return all executions of a specified package in a specified project
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs', @execs_project_name='My project', @execs_package_name='Pkg.dtsx'

    Installing sp_ssiscatalog. Under the covers sp_ssiscatalog actually calls many other stored procedures and functions, hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in a SQL Server Data Tools (SSDT) database project, which means that you have two ways of obtaining it.

    Download the source code. You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file include an SSDT database project which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer.

    Download a dacpac. Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0. Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard, however in this case there is a limitation. Because sp_ssiscatalog refers to objects in the SSIS Catalog (which it has to do, of course), the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don’t worry if reverting to the command line sounds a little daunting, I assure you it is not. Simply open a Visual Studio command prompt, cd to the folder containing the downloaded dacpac, and type:

        "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local)

    or the shortened form:

        "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local)

    remembering to set your server name appropriately (here mine is set to “(local)”). If everything works successfully, you’re done! You’ll have a new database called [SsisReportingPack] which contains sp_ssiscatalog.

    Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest assured, however, I will be adding many new capabilities in the future. Feedback is welcome. @Jamiet

    Read the article

  • Anti-cheat Javascript for browser/HTML5 game

    - by Billy Ninja
    I'm planning on venturing on making a single-player action RPG in JS/HTML5, and I'd like to prevent cheating. I don't need 100% protection, since it's not going to be a multiplayer game, but I want some level of protection. So what strategies do you suggest beyond minification and obfuscation? I wouldn't mind doing some simple server-side checking, but I don't want to go the Diablo 3 path of keeping all my game state changes on the server side. Since it's going to be an RPG of sorts, I came up with the idea of making a stats inspector that checks for abrupt changes in their values, but I'm not sure how consistent and trustworthy it can be. What about variable and function scopes? Working in smaller scopes whenever possible is safer, but is it worth the effort? Is there any way for the JavaScript to self-inspect its own text, like in a checksum? Are there browser-specific solutions? I wouldn't mind restricting it to Chrome only in the early builds.
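    One way to picture the stats-inspector idea in JavaScript (a minimal sketch; the stat names, thresholds and tick rate are assumptions, and anything running client-side can still be defeated by a determined cheater):

        // Keep a private snapshot of the stats and flag abrupt, unexplained jumps.
        function makeStatsInspector(stats, maxDeltaPerTick) {
          var snapshot = JSON.stringify(stats); // last known-good values
          return function inspect() {
            var last = JSON.parse(snapshot);
            for (var key in maxDeltaPerTick) {
              if (Math.abs(stats[key] - last[key]) > maxDeltaPerTick[key]) {
                return false; // change too large for one tick: suspicious
              }
            }
            snapshot = JSON.stringify(stats); // accept the new values
            return true;
          };
        }

        // Usage: call inspect() once per game tick.
        var stats = { hp: 100, gold: 0 };
        var inspect = makeStatsInspector(stats, { hp: 20, gold: 50 });
        // ... game loop mutates stats ...
        if (!inspect()) { console.log("possible tampering detected"); }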

    Read the article

  • How *not* to handle a compensation step on failure in an SSIS package

    - by James Luetkehoelter
    Just stumbled across this where I'm working. Someone created a global error handler for a package that included this SQL step:

        DELETE FROM Table WHERE DateDiff(MI, ExportedDate, GetDate()) < 5

    So if the package runs for longer than 5 minutes and fails, nothing gets cleaned up. Please people, don't do this...(read more)
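    For contrast, a compensation step scoped to the run itself rather than to the clock might look like the sketch below (the table, the BatchId column and the @batch_id variable are all hypothetical; the point is to key the cleanup on an identifier assigned at the start of the run):

        -- Tag exported rows with the run that produced them,
        -- then compensate by run, not by elapsed time.
        DELETE FROM dbo.ExportTable
        WHERE BatchId = @batch_id;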

    Read the article

  • Brazil Identity Customer Forum a Huge Success

    - by Tanu Sood
    As we continue to execute on the global Identity Management 11gR2 launch event series, if the success of the Brazil event is any indication, the London event coming up on October 24th will be a blowout! These events provide a unique opportunity to hear directly from and network with existing (and successful) Oracle Identity Management customers, as well as connect directly with product & technology experts. The Identity Forum agenda includes presentations from product experts on the latest release of Oracle Identity Management, followed by a live product demonstration and local customer presentations or panel discussions with both customers and implementation partners. The very successful launch event in Brazil concluded yesterday. Here are some pictures from the event. Want to be part of the Identity Customer Forum? Then do connect with your local Oracle representative or let us know via this blog or @oracleidm. We hope to see you soon at an event near you.

    Read the article

  • Arrow ECS: VAD mit Weitblick

    - by A&C Redaktion
    Arrow ECS helps Oracle partners establish themselves successfully for the long term. As a Value Added Distributor (VAD for short) for the Oracle software and hardware portfolio, Arrow offers partners valuable added services, for example in the areas of consulting, sales and product marketing. The advantage: partners can concentrate fully on their core business. In the video, Martin Wilhelm, Manager Business Unit Enterprise Solutions, Herbert Varga from Product Management, and Maria Keller, sales expert for Oracle products, explain exactly how the collaboration works. Arrow ECS stands for competent and reliable cooperation with its partners and has already been named Oracle Global Value Added Distributor of the Year several times.

    Read the article

  • Upgrading to Gnome 3.4 breaks Unity and gnome-shell

    - by mac
    I have upgraded my GNOME Shell to 3.4 on Ubuntu 11.10 through:

        sudo add-apt-repository ppa:ricotz/testing
        sudo add-apt-repository ppa:gnome3-team/gnome3
        sudo apt-get update && sudo apt-get dist-upgrade
        sudo apt-get install gnome-shell

    But it broke my system. GNOME Shell is completely broken: when I log in it just shows the desktop wallpaper and nothing else. And importantly, Unity is also broken. Attaching the screenshot. Some main issues: 1) Two menus are appearing now, the global menu as well as the application menu. 2) Icons on the top-right panel are appearing weirdly. 3) My default Ambiance theme also got screwed; instead of black menus I am seeing white menus. How do I fix them? Do I have an option to revert back to the original settings, or will reinstalling Unity/GNOME Shell help?
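    A common way back (not from the original question, but a standard approach; it assumes the ppa-purge tool, which downgrades a PPA's packages to the versions in the official Ubuntu archives) would be:

        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:gnome3-team/gnome3
        sudo ppa-purge ppa:ricotz/testing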

    Read the article

  • Configure IPv6 on your Linux system (Ubuntu)

    After the presentation on IPv6 at the first event of the Emtel Knowledge Series and some recent discussion on social media networks with other geeks and Linux-interested IT people here in Mauritius, I thought that I should (finally) give it a try and tweak my local network infrastructure. Honestly, I have been too busy with contractual project work and it never really occurred to me to set up IPv6 in my LAN. Well, the following paragraphs are going to shed some light on those aspects of modern computer and network technology. This is the first article in a series on IPv6 configuration:

        Configure IPv6 on your Linux system
        DHCPv6: Provide IPv6 information in your local network
        Enabling DNS for IPv6 infrastructure
        Accessing your web server via IPv6

    Piece of advice: this is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

    Let's embrace IPv6. The basic configuration on Linux is actually very simple, as the kernel, operating system and user-space programs support the protocol natively. If your system is ready to go for IP (aka IPv4), then you are good to go for anything else. At least, I didn't have to install any additional packages on my system(s). We are going to assign a static IPv6 address to the system. Hence, we have to modify the definition of interfaces and check whether we have an inet6 entry specified. Open your favourite text editor and check the following entries (it should be at least similar to this):

        $ sudo nano /etc/network/interfaces
        auto eth0
        # IPv4 configuration
        iface eth0 inet static
          address 192.168.1.2
          network 192.168.1.0
          netmask 255.255.255.0
          broadcast 192.168.1.255
        # IPv6 configuration
        iface eth0 inet6 static
          pre-up modprobe ipv6
          address 2001:db8:bad:a55::2
          netmask 64

    Of course, you might have to adjust your interface device (eth0), or you might be interested in having multiple directives for additional devices (eth1, eth2, etc.). The auto instruction takes care that your device is enabled and configured during the boot phase. The use of the pre-up directive depends on your kernel configuration, but in most scenarios it might be an optional line. Anyway, it doesn't hurt to have it enabled after all, just to be on the safe side. Next, either restart your network subsystem like so:

        $ sudo service networking restart

    Or you might prefer to do it manually with identical parameters, like so:

        $ sudo ifconfig eth0 inet6 add 2001:db8:bad:a55::2/64

    In case you're logged in remotely to your PC (i.e. via ssh), it is highly advised to opt for the second choice and add the device manually. You can check your configuration afterwards with one of the following commands (depending on what is installed):

        $ sudo ifconfig eth0
        eth0      Link encap:Ethernet  HWaddr 00:21:5a:50:d7:94
                  inet addr:192.168.160.2  Bcast:192.168.160.255  Mask:255.255.255.0
                  inet6 addr: fe80::221:5aff:fe50:d794/64 Scope:Link
                  inet6 addr: 2001:db8:bad:a55::2/64 Scope:Global
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

        $ sudo ip -6 address show eth0
        3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
            inet6 2001:db8:bad:a55::2/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::221:5aff:fe50:d794/64 scope link
               valid_lft forever preferred_lft forever

    In both cases, it confirms that our network device has been assigned a valid IPv6 address. That's it in general for your setup on one system.
    But of course, you might be interested in enabling more services for IPv6, especially if you're already running a couple of them in your IP network. More details are available on the official Ubuntu Wiki. Continue with the next article to configure your network to provide IPv6 address information automatically in your local infrastructure.
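    As a quick sanity check (my addition, not from the original article; the target address follows its example), you could ping the newly assigned address from the machine itself or a neighbour:

        $ ping6 -c 4 2001:db8:bad:a55::2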

    Read the article

  • Uncheck Radio Button on Double Click

    - by Rajneesh Verma
    Hi, recently I got one requirement: I have to uncheck a radio button list when a user double-clicks it (tries to uncheck). I did this using JavaScript. Below is the code. Designer:

        <head runat="server">
          <title>:Radio Button List Demo:</title>
          <script language="javascript" type="text/javascript">

            // Global variable to store selected value
            var lastchecked = "";

            function rblSelectedValue...(read more)
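    The excerpt is cut off, but the core idea can be sketched in plain JavaScript (a minimal standalone version, not the author's full code; the markup wiring is an assumption):

        // Clear a radio button when it is double-clicked.
        var lastchecked = "";
        function rblDoubleClick(radio) {
          radio.checked = false; // uncheck the selection
          lastchecked = "";      // forget the stored value
        }
        // Wire-up, e.g.: <input type="radio" name="demo" ondblclick="rblDoubleClick(this)">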

    Read the article

  • BREAKING NEWS: Bunny Inc. becomes a Social Enterprise

    - by kellsey.ruppel(at)oracle.com
    Bunny what? Is your business adaptive, agile, innovative, productive… profitable? No? Wondering how to make it so? Social Enterprise is gaining ground as a global trend to accelerate business performance by better engaging employees, partners and customers. Starting with this post, we look forward to stimulating an open conversation on the benefits, the stumbling blocks and the best practices of the Enterprise 2.0 journey… but with a bunny smile! Is Social Enterprise revolutionary or evolutionary? How does it impact traditional systems (such as ERP, CRM, BPM, Portals)? How do you measure it? How do you avoid major mistakes? We want to share our vision and to hear from you. Tell us what you did, what you are going to do and what you would never do with social and ... start looking for the invasion of the #e20bunnies at #webcenter. Join the discussion on LinkedIn! And follow the conversation on Twitter! Technorati Tags: UXP, collaboration, enterprise 2.0, modern user experience, oracle, portals, webcenter, e20bunnies

    Read the article

  • What's Your Supply Chain+Manufacturing Strategy for Success

    - by [email protected]
    Forward-thinking enterprises look to eliminate their dependence on legacy applications that manage information in batch, replacing them with real-time, integrated, modern information management. With rapid manufacturing and global supply chains much more complex today, and the pace of change ever increasing, leading organizations need better ways to orchestrate supply chain synchronization with their partner and customer base. EM magazine's Mar/Apr '10 edition covers this topic in the article "Strategising for Success" (pp. 26-27), discussing the options available to organizations as they drive improvements in the levels of collaboration with their partners, suppliers, shippers, distributors and, ultimately, their end users: the customer! I'll paste the link to the article here as soon as I validate/confirm it.

    Read the article

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought-leadership to the global banking industry. Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector. By effectively communicating best practices, the Basel Committee has influenced financial regulations worldwide. Basel regulations are intended to help banks:

        More easily absorb shocks due to various forms of financial-economic stress
        Improve risk management and governance
        Enhance regulatory reporting and transparency

    In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems. This new set of regulations included many enhancements to previous rules and will have both short- and long-term impacts on the banking industry. Some of the key features of Basel III include:

        A stronger capital base: more stringent capital standards, higher capital requirements, and the introduction of capital buffers
        Additional risk coverage: enhanced quantification of counterparty credit risk, credit valuation adjustments, wrong-way risk, and an Asset Value Correlation Multiplier for large financial institutions
        Liquidity management and monitoring
        Introduction of a leverage ratio
        Even more rigorous data requirements

    To implement these features banks need to embark on a journey replete with challenges. These can be categorized into three key areas: data, models and compliance.

    Data Challenges

        Data quality - all standard dimensions of data quality (DQ) have to be demonstrated. Manual approaches are now considered too cumbersome and automation has become the norm.
        Data lineage - data lineage has to be documented and demonstrated. The PPT/Excel approach to documentation is being replaced by metadata tools. Data lineage has become dynamic due to a variety of factors, making static documentation outdated quickly.
        Data dictionaries - a strong and clean business glossary is needed, with proper identification of business owners for the data.
        Data integrity - a strong, scalable architecture with workflow tools helps demonstrate data integrity. Manual touch points have to be minimized.
        Data relevance/coverage - data must be relevant to all portfolios and storage devices must allow for sufficient data retention. Coverage of both on and off balance sheet exposures is critical.

    Model Challenges

        Model development - requires highly trained resources with both quantitative and subject matter expertise.
        Model validation - all Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.
        Model documentation - all models need to be adequately documented. Creation of document templates and model development processes/procedures is key.
        Risk and finance integration - this integration is necessary for Basel, as the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management, and they need to somehow be equal. This is tricky at best from an implementation perspective.

    Compliance Challenges

        Rules interpretation - some Basel III requirements leave room for interpretation. A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities.
        Gap identification and remediation - internal identification and remediation of gaps ensures smoother Basel compliance and audit processes. However, business lines are challenged by the competing priorities which arise from regulatory compliance and business-as-usual work.
        Qualification readiness - providing internal and external auditors with robust evidence of a thorough examination of the readiness to proceed to parallel run and Basel qualification.

    In light of new regulations like Basel III and local variations such as the Dodd-Frank Act (DFA) and the Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions. For example, executives must consider: How will Basel III play into their risk appetite? How will they create project plans for Basel III when they haven't yet finished implementing Basel II? How will new regulations impact capital structure, including profitability and capital distributions to shareholders? After all, new regulations often lead to diminished profitability as well as an assortment of implementation problems, as we discussed earlier in this note. However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability. And a more stable banking system:

        Increases consumer confidence, which in turn supports banking activity
        Ensures that adequate funding is available for individuals and companies
        Puts regulators at ease, allowing bankers to focus on banking

    Stability is intended to bring long-term profitability to banks. Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor and disclose its risks. This can be done with the assistance and oversight of an independent regulatory authority. A spectrum of banks exists today wherein some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace them for the benefits I highlighted above. Do share with me how your institution is coping with and embracing these new regulations within your bank.

    Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services. He has over 19 years of experience in areas that span enterprise risk management; credit, market, and country risk management; financial modeling and valuation; and international financial markets research and analyses.

    Read the article

  • Facing problem with "gtk.RESPONSE_OK" in the simple-player quickly tutorial

    - by sumit_gt
    I am fairly new to both Quickly and Python. I am facing several problems while learning to use Quickly from the following tutorial on the Ubuntu developers site: http://developer.ubuntu.com/resources/app-developer-cookbook/multimedia/creating-a-simple-media-player/ I'm unable to understand the following error:

        Traceback (most recent call last):
          File "/home/sumit/Sumit/simple-player/simple_player/SimplePlayerWindow.py", line 36, in on_openbutton_clicked
            if response==gtk.RESPONSE_OK:
        NameError: global name 'gtk' is not defined

    I realize that I am supposed to import something, so I tried to add import gtk, which didn't work and gave the following error:

        from gtk import _gtk
        /usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:40: Warning: g_type_get_qdata: assertion `node != NULL' failed
          from gtk import _gtk

    I have followed every step of the tutorial so far, but there is no mention of any imports other than "prompts" and "os". Please help. Contribution of Agmenor, facing the same problem: I also tried to replace the text if response == gtk.RESPONSE_OK: with if response == Gtk.RESPONSE_OK: (notice the capital G). This gives another error: AttributeError: 'gi.repository.Gtk' object has no attribute 'RESPONSE_OK'
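    For what it's worth, under GObject introspection (the GTK 3 bindings that Quickly projects use) the response constants live on Gtk.ResponseType, so the comparison would read as follows (a sketch, not verified against that exact tutorial; the dialog variable is hypothetical):

        # PyGObject (GTK 3): the old PyGTK gtk.RESPONSE_OK becomes Gtk.ResponseType.OK
        from gi.repository import Gtk

        response = dialog.run()  # dialog: e.g. a Gtk.FileChooserDialog from the tutorial
        if response == Gtk.ResponseType.OK:
            print("OK clicked")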

    Read the article

  • Oracle lanza una comunidad específica para hardware

    - by Eloy M. Rodríguez
    For those who don't know it yet, I want to introduce a group of interest created by the company on Facebook under the name Oracle Hardware Social Media Hub, with the aim of offering a meeting place on the web where you can find thousands of Oracle experts, customers, partners and recognized leaders to discuss and discover the latest from Oracle. There you will find a pioneering question-and-answer application called Pregunte al Experto de Oracle (Ask the Oracle Expert), where you can pose questions, contribute comments and even be rewarded for your specialized contributions with the title of recognized leader. In the Hardware Hub you can, among other things:

        Get exclusive content, for members only
        Share your knowledge and experience with a global community
        Communicate with Oracle experts in an informal setting
        Discover innovative methods for optimizing the performance of your hardware
        Access content in your language, including event information, webcasts, white papers and much more.

    Read the article

  • SUN Customers and Partners, preview My Oracle Support

    - by chris.warticki
    Preview My Oracle Support now! Take advantage of My Oracle Support before full migration. Oracle Global Customer Support invites you to preview some of the support platform's key capabilities. With the preview of My Oracle Support, Sun customers and partners have immediate access to:

        My Oracle Support Community, with live advisor webcasts, active moderation by Oracle/Sun support engineers, user interaction, best-practices presentations, and news and announcements
        Knowledgebase, with more than 900,000 articles, including more than 100,000 Sun support articles and documents

    -Chris Warticki, twittering @cwarticki. Join one of the Twibes: http://twibes.com/MyOracleSupport or http://twibes.com/OracleSupport

    Read the article

  • Should a programmer take writing lessons to enhance code expressiveness?

    - by Jose Faeti
    Given that programmers are authors who write code to express abstract thoughts and concepts, and that good code should be readable by other programmers without difficulty or misunderstanding, should a programmer take writing lessons to write better code? Abstracting concepts and real-world problems/entities is an important part of writing good code, and a good mastery of the language used for coding should allow the programmer to express his thoughts more easily, or in a better way. Besides, when trying to write or rewrite code to make it better, much time can be spent deciding the names of functions, variables and data structures. I think this could also help to avoid writing code with more than one meaning, often a cause of misunderstanding between different programmers. Code should always express its function clearly and unambiguously.

    Read the article

  • LaTeX-like display programming environment

    - by Gage
    I used to be a hobbyist programmer, but now I'm also a fairly experienced physicist and find myself programming to solve certain problems quite a lot. In physics, we use variables with superscripts, subscripts, italics, underlines, etc. To bridge this gap to the computer we usually use LaTeX. Now, I generally use MATLAB for handling any data and such, and find it very irritating that I can't basically use LaTeX for variable names. Something as simple as σ_y has to be named either sigma_y or some descriptive name like peak_height_error. I don't necessarily want fully workable LaTeX in my code, but I do want to be able to use Greek letters and super/sub-scripts at the very least. Does this exist?

    Read the article

  • The Information Driven Value Chain - Part 1

    - by Paul Homchick
    One hundred years ago, there were places on Earth that no man had ever seen. Today, a man standing in one of those places can instantaneously communicate with someone who may be strolling down the street on his way to lunch halfway around the globe. Our world is shrinking and becoming virtual. It is a world of incredible bounty and speed, where we can get a product delivered to us anywhere on earth within a day or two. However, this world is also one of challenge, where volatility, uncertainty, risk and chaos are our daily companions. To prosper amid the realities of this new world, the enterprise needs a business model. Globalization and instant communications demand greater operational flexibility than ever before. Extended supply chains have elevated the management of risk to a central concern, and regulatory demands from multiple governments place an increasing burden of compliance on companies. Finally, the speed of today's business requires continuous innovation to keep from falling behind the global competition.

    Read the article

  • 15 Oracle Winners at Progressive Manufacturing 100 Awards Event

    - by [email protected]
    Oracle is pleased to congratulate its 15 winners of the PM100 awards program at the Breakers Hotel in Palm Beach, Florida, May 3-5, 2010. The Progressive Manufacturing Summit is where today's top manufacturing executives come together and share their strategies, experiences and best practices for becoming more competitive in today's global market. The format is extremely interactive, providing the rarest of opportunities to participate in a high-level conversation with leaders in supply chain and manufacturing. Attendees walk away with new insights and strategies for growing and moving their business forward, new contacts, and a tangible action plan. For more information: Event: http://www.managingautomation.com/summit/index.aspx Winners: http://www.managingautomation.com/awards/winners.aspx

    Read the article

  • Code Monster Helps Introduce Kids (and Curious Adults) to the Basics of Programming

    - by Jason Fitzpatrick
    If you’re looking for a fun way to introduce a kid to programming (or sate your own curiosity), Crunchzilla’s Code Monster is a real-time introduction to basic programming concepts. How does Code Monster work? Users are guided through the programming experience (using JavaScript) by a talkative blue monster that asks questions about the code and suggests courses of action. Play long enough and you travel from simple variables to more complex ideas like conditionals, expressions, and more. It’s not a comprehensive programming curriculum (nor does it claim to be) but it’s a great way to introduce people of all ages to programming. Hit up the link below to take it for a spin. Code Monster [via O'Reilly Radar]

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions: the work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression:

        In InnoDB, compressed pages are fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time.
        InnoDB uses zlib for compression.
        The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; i.e. we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll only ever have the compressed page.
        When both compressed and uncompressed images are present in the buffer pool they are always kept in sync, i.e. changes are applied to both atomically.
        Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions.
        DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT + DELETE + purge.
        A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well then we split the page into two and recompress both pages.

    Now let's talk about the three major improvements that we made in MySQL 5.6.

    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion, and when we generate much more than normal redo we fill up the space much more quickly; in order to reuse the redo space we then have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the previous behaviour. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.

    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic, i.e. you can change it on the fly.

    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failure depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures then we should try to pack the 16K uncompressed version of the page less densely, i.e. we let some space in the 16K page go unused in an attempt to ensure that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page till we get compression failures within an agreeable range. It works the other way as well: we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed:

        innodb_compression_failure_threshold_pct: default 5, range 0-100, dynamic; the percentage of compress ops that must fail before we start using padding. The value 0 has the special meaning of disabling padding.
        innodb_compression_pad_pct_max: default 50, range 0-75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.
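    To make the knobs concrete, here is how they might be exercised from a MySQL 5.6 client (a sketch; the table definition is hypothetical, and the CREATE TABLE additionally assumes innodb_file_per_table=ON and innodb_file_format=Barracuda, which compressed tables require):

        -- A compressed table with 8K pages
        CREATE TABLE t1 (
            id INT PRIMARY KEY,
            payload VARCHAR(255)
        ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

        -- All of the parameters discussed above are dynamic
        SET GLOBAL innodb_log_compressed_pages = OFF;             -- only if the zlib version won't change across recovery
        SET GLOBAL innodb_compression_level = 9;                  -- trade CPU for better compression
        SET GLOBAL innodb_compression_failure_threshold_pct = 10; -- start padding after 10% failures
        SET GLOBAL innodb_compression_pad_pct_max = 60;           -- allow up to 60% of the page as pad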

    Read the article

  • Java - learning / migrating fast

    - by Yippie-Kai-Yay
    Hello! This is not one of those questions like "How do I learn Java extremely fast, I know nothing about programming, but I heard Java is cool, yo". I have an interview for a Java software developer position in a couple of weeks, and the thing is that I think I know C++ really well and I am somewhat good at C# (as in, I can probably answer a lot of questions related to these languages), but I have almost zero experience with Java. I have a lot of projects written in both languages, and I have participated in several open-source projects (mostly C++, though). Now, what should I do (in your opinion) to prepare myself for this Java interview? I guess migrating from C# to Java should be fairly fast, especially when you know a lot about programming in general, patterns, modern techniques, and have a lot of practical experience behind you. But still, two weeks is obviously not enough to learn Java in depth - so what should I focus on to have the best chance of passing the interview? Thank you.

    Read the article

  • Should *'s go next to the type or the variable name? [closed]

    - by derekerdmann
    Possible Duplicate: int* i; or int *i; or int * i; When working in C or C++, how should pointers be declared? Like this: char* derp; or this: char *derp; I typically use the first method, because the variable is a character pointer, but I know that it can create confusion when declaring multiple variables at once: char* herp, derp; Here herp becomes a character pointer, while derp is just a character. I know it often comes down to coding style, but which one is "better"? Should I sacrifice clarity to eliminate potential confusion?
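    The pitfall in question, sketched in plain C (compilable as-is):

        #include <stdio.h>

        int main(void) {
            char* herp, derp;   /* misleading: herp is a char*, derp is only a char */
            char *foo, *bar;    /* attaching the * to each name makes both pointers */
            char c = 'x';

            herp = &c;
            foo = &c;
            bar = foo;
            derp = *herp;       /* derp can only hold a char, not an address */
            printf("%c %c\n", *bar, derp);
            return 0;
        }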

    Read the article
