Search Results


  • A New Home for E-Business Suite Customer Adoption Information

    - by linda.fishman.hoyle
    Phew! I made it! A new home with my name. Let's talk about E-Business Suite. So much is going on and more and more customers are upgrading and implementing the latest release. I think I will highlight in this blog entry the most recent press release we issued 2 weeks ago about our Applications Unlimited success but in the release, we name several customers who are live on E-Business Suite Release 12.1 and then have a fabulous quote from a customer who is doing great things with our product.   Here is a link to the press release To make it easy for you, I am pulling out just the E-Business Suite information Oracle E-Business Suite: Oracle® E-Business Suite Release 12.1 provides organizations of all sizes, across all industries and regions, with a global business foundation that helps them reduce costs and increase productivity through a portfolio of rapid value solutions, integrated business processes and industry-focused solutions. The latest version of the Oracle E-Business Suite was designed to help organizations make better decisions and be more competitive by providing a global or holistic view of their operations. Abu Dhabi Media Company, Agilysis, C3 Business Solutions, Chicago Public Schools, Datacard Group, Guidance Software, Leviton Manufacturing, McDonald's, MINOR International, Usana Health Sciences, Zamil Plastic Industries Ltd. and Zebra Technologies are just a few of the organizations that have deployed the latest release of the Oracle E-Business Suite to help them make better decisions and be more competitive, while lowering costs and increasing performance. Customer Speaks "Leviton Manufacturing makes a very diverse line of products including electrical devices and data center products that we sell globally. We upgraded to the latest version of the Oracle E-Business Suite Release 12.1 to support our service business with change management, purchasing, accounts payable, and our internal IT help desk," said Bob MacTaggart, CIO of Leviton Manufacturing. "We consolidated seven Web sites that we used to host individually onto iStore. In addition, we run a site, using the Oracle E-Business Suite configurator, pricing and quoting for our sales agents to do configuration work. This site can now generate a complete sales proposal using Oracle functionality; we actually generate CAD drawings - the actual drawings themselves - based on configuration results. It used to take six to eight weeks to generate these drawings and now it's all done online in an hour to two hours by our sales agents themselves, totally self-service. It does everything they need. From our point of view that is a major business success. Not only is it a very cool, innovative application, but it also puts us about two years ahead of our competition."

    Read the article

  • USB drives not recognized all of a sudden (module usb_storage not loading)

    - by Siddharth
    I am very close to the solution; I just need to know how to get usb-storage to load. I have tried most of the advice on askubuntu and other sites, from enabling usb_storage to running fdisk -l, but I am unable to find steps to get it working again. sudo lsusb results Bus.... skipped 4 lines Bus 004 Device 002: ID 413c:3012 Dell Computer Corp. Optical Wheel Mouse Bus 005 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard Bus 001 Device 005: ID 8564:1000 sudo dmesg | tail reports [ 69.567948] usb 1-4: USB disconnect, device number 4 [ 74.084041] usb 1-6: new high-speed USB device number 5 using ehci_hcd [ 74.240484] Initializing USB Mass Storage driver... [ 74.256033] scsi5 : usb-storage 1-6:1.0 [ 74.256145] usbcore: registered new interface driver usb-storage [ 74.256147] USB Mass Storage support registered. [ 74.257290] usbcore: deregistering interface driver usb-storage fdisk -l reports Device Boot Start End Blocks Id System /dev/sda1 * 2048 972656639 486327296 83 Linux /dev/sda2 972658686 976771071 2056193 5 Extended /dev/sda5 972658688 976771071 2056192 82 Linux swap / Solaris I think I need steps to install and get the usb_storage module working. Edit: sudo modprobe -v usb-storage reports insmod /lib/modules/3.2.0-48-generic-pae/kernel/drivers/usb/storage/usb-storage.ko Edit: jsiddharth@siddharth-desktop:~$ sudo udevadm monitor --udev monitor will print the received events for: UDEV - the event which udev sends out after rule processing UDEV [4757.144372] add /module/usb_storage (module) UDEV [4757.146558] remove /module/usb_storage (module) UDEV [4757.148707] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6 (usb) UDEV [4757.149699] add /bus/usb/drivers/usb-storage (drivers) UDEV [4757.151214] remove /bus/usb/drivers/usb-storage (drivers) UDEV [4757.156873] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0 (usb) UDEV [4757.160903] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi) UDEV [4757.164672] add /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host) UDEV [4757.165163] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host) UDEV [4757.165440] remove /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi) Narrowing down more: it seems I need usb_storage to load as a module. jsiddharth@siddharth-desktop:~$ lsmod | grep usb usbserial 37201 0 usbhid 41937 0 hid 77428 1 usbhid Still no usb driver is mounted, nor does a device show up in /dev. Any step by step process to debug and fix this will be really helpful.
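    A minimal diagnostic sequence along these lines can help confirm whether usb-storage is being blacklisted or unloaded (a sketch, not from the original question, assuming a stock Ubuntu 12.04 install):
      # check whether usb-storage or usblp is blacklisted anywhere
      grep -RniE "usb.storage|usblp" /etc/modprobe.d/
      # load the module by hand and confirm it stays loaded
      sudo modprobe -v usb-storage
      lsmod | grep usb_storage
      # replug the drive, then look for a new SCSI disk and its device node
      dmesg | tail -n 20
      ls -l /dev/sd* /dev/disk/by-id/usb-* 2>/dev/null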

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. The architecture might end up like this: I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus, register on EventBrite if you want to come along. Synopsis Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes; Suggested issues and risks around caching including staleness of data, resource usage, performance and testing; Walked through a generic service cache implemented as a WCF behaviour – suitable for IIS- or BizTalk-hosted services - which I'll be releasing on CodePlex shortly; Listed common options for cache providers and their offerings. Discussion Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), reduced resource impact – CPU, memory, SQL and cost (e.g. caching results of paid-for services). Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern. Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers only accessible to cache clients via IPsec. Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, that serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade. Providers. Positive feedback for AppFabric Caching – speed, configurability and richness of the distributed model making it a good enterprise choice. .NET port of memcached well thought of for performance but lack of replication makes it less suitable for these shared scenarios. Replicated fork – repcached – untried and less active than memcached. NCache also well thought of, but Express version too limited for enterprise scenarios, and commercial versions look costly compared to AppFabric.

    Read the article

  • Real-time Big Data Analytics is a reality for StubHub with Oracle Advanced Analytics

    - by Mark Hornick
    What can you use for a comprehensive platform for real-time analytics? How can you process big data volumes for near-real-time recommendations and dramatically reduce fraud? Learn in this video what Stubhub achieved with Oracle R Enterprise from the Oracle Advanced Analytics option to Oracle Database, and read more on their story here. Advanced analytics solutions that impact the bottom line of a business are challenging due to the range of skills and individuals involved in realizing such solutions. While we hear a lot about the role of the data scientist, that role is but one piece of the puzzle. Advanced analytics solutions also have an operationalization aspect that also requires close proximity to where the transactional activity occurs. The data scientist needs access to the right data with which to model the business problem. This involves IT for data collection, management, and administration, as well as ensuring zero downtime (a website needs to be up 24x7). This also involves working with the data scientist to keep predictive models refreshed with the latest scripts. Integrating advanced analytics solutions into enterprise apps involves not just generating predictions, but supporting the whole life-cycle from data collection, to model building, model assessment, and then outcome assessment and feedback to the model building process again. Application and web interface designers need to take into account how end users will see and use the advanced analytics results, e.g., supporting operations staff that need to handle the potentially fraudulent transactions. As just described, advanced analytics projects can be "complicated" from just a human perspective. The extent to which software can simplify the interactions among users and systems will increase the likelihood of project success. The ability to quickly operationalize advanced analytics projects and demonstrate measurable value, means the difference between a successful project and just a nice research report. By standardizing on Oracle Database and SQL invocation of R, along with in-database modeling as found in Oracle Advanced Analytics, expedient model deployment and zero downtime for refreshing models becomes a reality. Meanwhile, data scientists are also able to explore leading edge techniques available in open source. The Oracle solution propels the entire organization forward to realize the value of advanced analytics.

    Read the article

  • Installing Canon LBP6000 in Ubuntu 12.04

    - by MMA
    This is really frustrating. I am trying to install LBP6000 in Ubuntu 12.04 without any success. (Well, I had success about a week back when I first bought the printer and finally printed pages after a struggle of several hours. Then today it suddenly stopped working and I uninstalled everything and started from scratch. Now, I seem to have lost the way.) My steps Downloaded the latest Canon driver from Canon site. File Linux_CAPT_PrinterDriver_V240_uk_EN.tar.gz Got the radu script (I am allowed only two hyperlinks, so can not put the link here. You can Google radu Canon) Changed the /etc/modprobe.d/blacklist-cups-usblp.conf file as instructed in, Official Documentation. (See the section Ubuntu 12.04 Install). Now this file looks like, # cups talks to the raw USB devices, so we need to blacklist usblp to avoid # grabbing them # blacklist usblp Rebooted my machine Changed the port in radu script to 59787 as instructed in the link at step 3. (Again see the section Ubuntu 12.04 Install, or see the comment at How to Install Canon LBP Printers in Ubuntu. Also put the latest deb files from step 1 in the appropriate directory of this script. Ran the radu script. A printer, LBP6000 got added. Not two printers, one to be disabled, as appeared in the message on the terminal after running the script. sudo /etc/init.d/ccpd status shows, Canon Printer Daemon for CUPS: ccpd: 3142 3139 Results The printer does not print. Printer state (from System Setting-Printing, or at cups http interface localhost:631/printers/LBP6000) goes from Idle to Processing, a job appears in print queue, and then the job disappears and the printer state goes back to Idle. The actual printer does not even blink. Diagnostics (got help from the link in step 3, Troubleshooting) captstatusui -P LBP6000 shows communication error lsmod | grep usblp did not show anything. After running, sudo modprobe usblp, shows usblp 17885 0 However, ls -l /dev/usb/lp0 gives, ls: cannot access /dev/usb/lp0: No such file or directory /var/ccpd did not exist, created, sudo mkdir /var/ccpd sudo mkfifo /var/ccpd/fifo0 sudo chown -R lp:lp /var/ccpd Any suggestion will be appreciated. Do not know what to do.
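    For reference, a condensed sequence to check the pieces the CAPT driver needs might look like this (a sketch based on the steps above, not from the original question; the printer name LBP6000 and the /var/ccpd paths are taken from this post, so adjust them to your setup):
      # is the Canon CAPT daemon running, and does CUPS know the printer?
      sudo /etc/init.d/ccpd status
      lpstat -p LBP6000
      # the CAPT backend talks to the printer through usblp, so the node must exist
      sudo modprobe usblp
      ls -l /dev/usb/lp0
      # recreate the ccpd FIFO if it is missing, then restart the daemon
      [ -p /var/ccpd/fifo0 ] || { sudo mkdir -p /var/ccpd && sudo mkfifo /var/ccpd/fifo0; }
      sudo chown -R lp:lp /var/ccpd
      sudo /etc/init.d/ccpd restart
      # ask the CAPT status monitor what it sees
      captstatusui -P LBP6000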

    Read the article

  • What Is Nuclear Meltdown?

    - by Gopinath
    Japan was first hit by a massive earthquake, then a ruthless tsunami washed away thousands of homes, and now they fear the worst – meltdown of nuclear power stations in the quake-hit area. Nuclear meltdowns are horrifying – remember the Chernobyl incident in the Soviet Union? The Chernobyl reactor meltdown released 400 times more radioactive material than the atomic bombing of Hiroshima. The effects of nuclear meltdowns are beyond the imagination of a common man: thousands of people lose their lives and hundreds of thousands more suffer from radiation-related diseases for many years. Nuclear meltdowns are dangerous, but how do they happen? What causes a nuclear meltdown? In simple terms – a nuclear meltdown is an accident that happens due to severe overheating of a nuclear reactor and results in the release of nuclear radiation into the environment. How A Nuclear Meltdown Happens According to Wikipedia: A meltdown occurs when a severe failure of a nuclear power plant system prevents proper cooling of the reactor core, to the extent that the nuclear fuel assemblies overheat and melt. A meltdown is considered very serious because of the potential that radioactive materials could be released into the environment. The fuel assemblies in a reactor core can melt if heat is not removed. A nuclear reactor does not have to remain critical for a core damage incident to occur, because decay heat continues to heat the reactor fuel assemblies after the reactor has shut down, though this heat decreases with time. A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss of pressure control accident, a loss of coolant accident (LOCA), an uncontrolled power excursion or, in some types, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely. Video – What Causes A Nuclear Meltdown Al Jazeera news has a good analysis of the feared nuclear meltdown at Japan's nuclear plants and also an animation of what causes a nuclear meltdown. cc image credit: flickr/jtjdt

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, they are a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5-element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object.
% cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) (void) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: _ET_DYN (shared object), _ET_EXEC (executable object), and _ET_REL (relocatable object). STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table: Section AttributeMeaning BITSSection is not of type SHT_NOBITS NOBITSSection is of type SHT_NOBITS SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE, and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!
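To make that function-only case concrete, a minimal stub setup might look roughly like this (my sketch following the article's conventions; the library and symbol names are hypothetical):
      % cat mapfile
      $mapfile_version 2
      STUB_OBJECT;
      SYMBOL_SCOPE {
              myfunc1;
              myfunc2;
              local: *;
      };
      % cc -Kpic -G -M mapfile -h libmylib.so.1 myfuncs.c -o stublib/libmylib.so.1 -z stub
      % cc -Kpic -G -M mapfile -h libmylib.so.1 myfuncs.c -o lib/libmylib.so.1
The only difference between the two links is the -z stub option and the output path, which keeps adding the stub build to an existing makefile a one-line change.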

    Read the article

  • Solaris Tips : Assembler, Format, File Descriptors, Ciphers & Mount Points

    - by Giri Mandalika
    1. Most Oracle software installers need assembler Assembler (as) is not installed by default on Solaris 11. Find and install eg., # pkg search assembler INDEX ACTION VALUE PACKAGE pkg.fmri set solaris/developer/assembler pkg:/developer/[email protected] # pkg install pkg:/developer/assembler Assembler binary used to be under /usr/ccs/bin directory on Solaris 10 and prior versions. There is no /usr/ccs/bin on Solaris 11. Contents were moved to /usr/bin 2. Non-interactive retrieval of the entire list of disks that format reports If the format utility cannot show the entire list of disks in a single screen on stdout, it shows some and prompts user to - hit space for more or s to select - to move to the next screen to show few more disks. Run the following command(s) to retrieve the entire list of disks in a single shot. format 3. Finding system wide file descriptors/handles in use Run the following kstat command as any user (privileged or non-privileged). kstat -n file_cache -s buf_inuse Going through /proc (process filesystem) is less efficient and may lead to inaccurate results due to the inclusion of duplicate file handles. 4. ssh connection to a Solaris 11 host fails with error Couldn't agree a client-to-server cipher (available: aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,arcfour) Solution: add 3des-cbc to the list of accepted ciphers to sshd configuration file. Steps: Append the following line to /etc/ssh/sshd_config Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour128,arcfour256,\ arcfour,3des-cbc Restart ssh daemon svcadm -v restart ssh 5. UFS: Finding the last mount point for a device fsck utility reports the last mountpoint on which the filesystem was mounted (it won't show the mount options though). The filesystem should be unmounted when running fsck. eg., # fsck -n /dev/dsk/c0t5000CCA0162F7BC0d0s6 ** /dev/rdsk/c0t5000CCA0162F7BC0d0s6 (NO WRITE) ** Last Mounted on /export/oracle ** Phase 1 - Check Blocks and Sizes ... ...

    Read the article

  • JavaOne Rock Star – Adam Bien

    - by Janice J. Heiss
    Among the most celebrated developers in recent years, especially in the domain of Java EE and JavaFX, is consultant Adam Bien, who, in addition to being a JavaOne Rock Star for Java EE sessions given in 2009 and 2011, is a Java Champion, the winner of Oracle Magazine’s 2011 Top Java Developer of the Year Award, and recently won a 2012 JAX Innovation Award as a top Java Ambassador. Bien will be presenting the following sessions: TUT3907 - Java EE 6/7: The Lean Parts CON3906 - Stress-Testing Java EE 6 Applications Without Stress CON3908 - Building Serious JavaFX 2 Applications CON3896 - Interactive Onstage Java EE Overengineering I spoke with Bien to get his take on Java today. He expressed excitement that the smallest companies and startups are showing increasing interest in Java EE. “This is a very good sign,” said Bien. “Only a few years ago J2EE was mostly used by larger companies -- now it becomes interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn.” Nashorn is an upcoming JavaScript engine, developed fully in Java by Oracle, and based on the Da Vinci Machine (JSR 292) which is expected to be available for Java 8.    Bien expressed concern about a common misconception regarding Java's mediocre productivity. “The problem is not Java,” explained Bien, “but rather systems built with ancient patterns and approaches. Sometimes it really is ‘Cargo Cult Programming.’ Java SE/EE can be incredibly productive and lean without the unnecessary and hard-to-maintain bloat. The real problems are ‘Ivory Towers’ and not Java’s lack of productivity.” Bien remarked that if there is one thing he wanted Java developers to understand it is that, "Premature optimization is the root of all evil. Or at least of some evil. Modern JVMs and application servers are hard to optimize upfront. It is far easier to write simple code and measure the results continuously. Identify the hotspots first, then optimize.”   He advised Java EE developers to, “Rethink everything you know about Enterprise Java. Before you implement anything, ask the question: ‘Why?’ If there is no clear answer -- just don't do it. Most well known best practices are outdated. Focus your efforts on the domain problem and not the technology.” Looking ahead, Bien remarked, “I would like to see open source application servers running directly on a hypervisor. Packaging the whole runtime in a single file would significantly simplify the deployment and operations.” Check out a recent Java Magazine interview with Bien about his Java EE 6 stress monitoring tool here.

    Read the article

  • SQL SERVER – Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function

    - by pinaldave
    Earlier this week I asked a question where I asked how to Swap Values of the column without using CASE Statement. Read here: A Puzzle – Swap Value of Column Without Case Statement,there were more than 50 solutions proposed in the comment. There were many creative solutions. I have mentioned my personal favorite (different ones) here: Solution of Puzzle – Swap Value of Column Without Case Statement. However, I received lots of questions regarding one of the Solution by SIJIN KUMAR V P. He has used the function SOUNDEX in his solution. The request was to explain how SOUNDEX and DIFFERENCE works. Well, there are pretty decent documentations provided over here SOUNDEX function and DIFFERENCE over on MSDN and if I attempt to explain this function I will end up writing the same details which are available on MSDN. Instead of writing theory, we will try to learn this function by using a couple of simple puzzles. You try to solve the puzzles using the MSDN and see if you can learn something very quickly. In simple words - SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The first character of the code is the first character of character_expression and the second through fourth characters of the code are numbers that represent the letters in the expression. Vowels incharacter_expression are ignored unless they are the first letter of the string. DIFFERENCE function returns an integer value. The  integer returned is the number of characters in the SOUNDEX values that are the same. The return value ranges from 0 through 4: 0 indicates weak or no similarity, and 4 indicates strong similarity or the same values. Learning Puzzle 1: Now let us run following four queries and observe its output. SELECT SOUNDEX('SQLAuthority') SdxValue SELECT SOUNDEX('SLTR') SdxValue SELECT SOUNDEX('SaLaTaRa') SdxValue SELECT SOUNDEX('SaLaTaRaM') SdxValue When you look at the result set all the four values are same. The reason for all the values to be same is as for SQL Server SOUNDEX function all the four strings are similarly sounding string. Learning Puzzle 2: Now let us run following five queries and observe its output. SELECT DIFFERENCE (SOUNDEX('SLTR'),SOUNDEX('SQLAuthority')) SELECT DIFFERENCE (SOUNDEX('TH'),SOUNDEX('SQLAuthority')) SELECT DIFFERENCE ('SQLAuthority',SOUNDEX('SQLAuthority')) SELECT DIFFERENCE ('SLTR',SOUNDEX('SQLAuthority')) SELECT DIFFERENCE ('SLTR','SQLAuthority') When you look at the result set you will get the result in the ranges from 1 to 4. Here is how it works if your result is 0 which means absolutely not relevant to each other and if your result is 1 which means the results are relevant to each other. Have you ever used above two functions in your business need or on production server? If yes, would you please leave a comment with use cases. I believe it will be beneficial to everyone. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
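    As one illustration of the kind of business use case the author asks about (my own sketch, not from the original post; the table and column names are hypothetical), SOUNDEX and DIFFERENCE are handy for fuzzy matching of misspelled names:
      -- find customers whose last name sounds like 'Philips', however it is spelled
      SELECT CustomerID, LastName
      FROM dbo.Customers
      WHERE DIFFERENCE(LastName, 'Philips') >= 3  -- 4 = strongest similarity, 0 = none
      ORDER BY DIFFERENCE(LastName, 'Philips') DESC;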

    Read the article

  • Forced Learning

    - by Ben Griswold
    If you ask me, it can be a little intimidating to stand in front of a group and walkthrough anything remotely technical. Even if you know “Technical Thingy #52” inside and out, public speaking can be unsettling.  And if you don’t have your stuff together, well, it can be downright horrifying. With that said, if given the choice, I still like to schedule myself to present on unfamiliar topics. Over the past few months, I’ve talked about Aspect-Oriented Programming, Functional Programming, Lean Software Development and Kanban Systems, Domain-Driven Design and Behavior Driven Development.  What do these topics have in common? You guessed it: I was truly interested in them. I had only a superficial understanding of each. Huh?  Why in the world would I ever want to to put myself in that intimidating situation? Actually, I rarely want to put myself into that situation but I often do as I like the results.  There’s nothing remotely clever going on here.  All I’m doing is putting myself into a compromising situation knowing that I’ll likely work myself out of it by learning the topic prior to the presentation.  I’m simply time-boxing myself to learn something new while knowing there are negative repercussions if I fall short. So, I end up doing tons of research and I learn bunches to ensure I have my head firmly wrap around the material before my talk. I’m not saying I become an expert overnight (or over a couple of weeks) but I’ll definitely know enough to be confident and comfortable and I’ll know more than enough to ensure the audience will learn a thing or two from me. It’s forced learning and though it might sound a little scary to some, it works for me. Now I could very easily rename this post to something like Fear Is My Motivator because, in a sense, fear of failure and embarrassment is what’s driving my learning. However, I’m the guy signing up for the presentation and since the entire process is self-imposed I’m not sure Fear deserves too much credit.  Anyway…

    Read the article

  • Excel Solver vs Solver Foundation

    - by JoshReuben
    I recently read a book http://www.amazon.com/Scientific-Engineering-Cookbook-Cookbooks-OReilly/dp/0596008791/ref=sr_1_1?ie=UTF8&s=books&qid=1296593374&sr=8-1 - the Excel Scientific and Engineering Cookbook. The 2 main tools that this book leveraged were the Data Analysis Pack and Excel Solver. I had previously been acquainted with Microsoft Solver Foundation - this is a full-fledged API for solving optimization problems, and it goes beyond being a mere Excel plugin - it exposes a C# programmatic interface for in-process integration and a web service interface for out-of-process integration. Were they the same? Apparently not! 2 different solver frameworks for Excel: http://www.solver.com/index.html http://www.solverfoundation.com/ I contacted both vendors to get their perspectives. Here's what the Excel Solver guys had to say: "The Solver Foundation requires you to learn and use a very specific modeling language (OML). The Excel solver allows you to formulate your optimization problems without learning any new language simply by entering the formulas into cells on the Excel spreadsheet, something that nearly everyone is already familiar with doing. The Excel Solver also allows you to seamlessly upgrade to products that combine Monte Carlo Simulation capabilities (our Risk Solver Premium and Risk Solver Platform products) which allow you to include uncertainty into your models when appropriate. Our advanced Excel Solver Products also have a number of built-in reporting tools for advanced analysis of your model and its results" And here's what the Microsoft Solver Foundation guys had to say: "With the release of Solver Foundation 3.0, Solver Foundation has the same kinds of solvers (plus a few more) than what is found in Excel Solver. I think there are two main differences: 1. Problems are described differently. In Excel Solver the goals and constraints are specified inside the spreadsheet, in formulas. In Solver Foundation they are described either in .Net code that uses the Solver Foundation Services API, or using the OML modeling language in Excel. 2. Solver Foundation's primary strength is on solving large linear, mixed integer, and constraint models. That is, models that contain arbitrary nonlinear functions (such as trig functions, IF(), powers, etc) are handled a bit better by the Excel Solver at this point."
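    To give a feel for the difference, a tiny linear model in OML might look roughly like this (my illustration, written from memory, so treat the exact syntax as approximate and check the Solver Foundation documentation); in Excel Solver the same model would simply be cell formulas plus the Solver dialog:
      Model[
        Decisions[Reals[0, Infinity], x, y],
        Constraints[capacity -> 2*x + 3*y <= 12],
        Goals[Maximize[profit -> 20*x + 30*y]]
      ]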

    Read the article

  • Workaround: XNA 4 importing only part of 3d model from FBX

    - by Vitus
    Recently I found a problem with importing 3D models from FBX files: they sometimes import only partially. That is, when you draw a 3D model loaded from an FBX file and processed by the content pipeline, you get only part of its meshes. "Sometimes" means that you get this error only for some files. The results of my research are below. For example, I have a 10 MB binary FBX file with a model; when I load it, the resulting Model instance contains only part of the meshes. Because models from other files import normally, I think it's a "bad format" file. When you add an FBX file to your XNA Content project and build it, the imported file is processed by the XNA Fbx Importer & Processor. On MSDN I found that FbxImporter is designed to work with the 2006.11 version of the FBX format. My file is in FBX 2012 format, so I need to convert it to the 2006 format. That can be done using Autodesk FBX Converter 2012.1. I tried to convert it to other versions of the FBX format, but without success. I also tried to import my FBX file into 3D MAX, and it imported correctly. Then I exported the model from 3D MAX, which generated another FBX that I added to my XNA project. After that I got the full model, and it rendered well! So, the internal data structure of an FBX file matters more for correct XNA import than its version! Unfortunately, Autodesk FBX is not an open file format. If you want to work with FBX, you should use the Autodesk FBX SDK. This way you can manually read the content of an FBX file and use it any way you like. Then I tried to convert my source FBX file to DAE (Collada), and the resulting DAE file back to FBX, using FBX Converter (FBX -> DAE -> FBX). The resulting FBX file can be imported normally. Conclusion: correct XNA FbxImporter operation doesn't depend on the version (2006, 2011, etc.) or form (binary, ASCII) of the FBX file; the internal FBX data structure matters much more. To make an FBX "readable" for the XNA importer you can use a double conversion like FBX -> Collada -> FBX. You can also use the FBX SDK to manually load data from FBX. P.S. Autodesk FBX Converter 2012 is more than a simple converter. It provides tools like FBX Explorer, which shows you the structure of an FBX file; FBX Viewer, which renders the content of an FBX and provides basic interaction like model move and zoom; and FBX Take Manager, which allows working with embedded animations.
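    For the FBX SDK route, a minimal loading sketch in C++ might look roughly like this (my illustration, not from the post; written against the FBX SDK 2012-era API, so verify the class names for your SDK version):
      #include <fbxsdk.h>

      int main()
      {
          // create the SDK manager and the I/O settings it uses for import
          FbxManager* manager = FbxManager::Create();
          FbxIOSettings* ios = FbxIOSettings::Create(manager, IOSROOT);
          manager->SetIOSettings(ios);

          // import "model.fbx" into a scene
          FbxImporter* importer = FbxImporter::Create(manager, "");
          if (!importer->Initialize("model.fbx", -1, manager->GetIOSettings()))
              return 1;                          // unreadable file or unsupported version
          FbxScene* scene = FbxScene::Create(manager, "scene");
          importer->Import(scene);
          importer->Destroy();

          // walk the node hierarchy and read meshes, materials, animations, etc.
          FbxNode* root = scene->GetRootNode();
          for (int i = 0; root && i < root->GetChildCount(); i++)
          {
              FbxNode* child = root->GetChild(i);
              // child->GetName(), child->GetMesh(), ...
          }

          manager->Destroy();
          return 0;
      }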

    Read the article

  • Objective-C Moving UIView along a curved path

    - by PruitIgoe
    I'm not sure if I am approaching this the correct way. In my app, when a user touches the screen I capture the point and create an arc from a fixed point to that touch point. I then want to move a UIView along that arc. Here's my code: ViewController.m //method to "shoot" object - KIP_Projectile creates the arc, KIP_Character creates the object I want to move along the arc ... //get arc for trajectory KIP_Projectile* vThisProjectile = [[KIP_Projectile alloc] initWithFrame:CGRectMake(51.0, fatElvisCenterPoint-30.0, touchPoint.x, 60.0)]; vThisProjectile.backgroundColor = [UIColor clearColor]; [self.view addSubview:vThisProjectile]; ... KIP_Character* thisChar = [[KIP_Character alloc] initWithFrame:CGRectMake(51, objCenterPoint-5, imgThisChar.size.width, imgThisChar.size.height)]; thisChar.backgroundColor = [UIColor clearColor]; thisChar.charID = charID; thisChar.charType = 2; thisChar.strCharType = @"Projectile"; thisChar.imgMyImage = imgThisChar; thisChar.myArc = vThisProjectile; [thisChar buildImage]; [thisChar traceArc]; in KIP_Projectile I build the arc using this code: - (CGMutablePathRef) createArcPathFromBottomOfRect : (CGRect) rect : (CGFloat) arcHeight { CGRect arcRect = CGRectMake(rect.origin.x, rect.origin.y + rect.size.height - arcHeight, rect.size.width, arcHeight); CGFloat arcRadius = (arcRect.size.height/2) + (pow(arcRect.size.width, 2) / (8*arcRect.size.height)); CGPoint arcCenter = CGPointMake(arcRect.origin.x + arcRect.size.width/2, arcRect.origin.y + arcRadius); CGFloat angle = acos(arcRect.size.width / (2*arcRadius)); CGFloat startAngle = radians(180) + angle; CGFloat endAngle = radians(360) - angle; CGMutablePathRef path = CGPathCreateMutable(); CGPathAddArc(path, NULL, arcCenter.x, arcCenter.y, arcRadius, startAngle, endAngle, 0); return path; } - (void)drawRect:(CGRect)rect { CGContextRef currentContext = UIGraphicsGetCurrentContext(); _myArcPath = [self createArcPathFromBottomOfRect:self.bounds:30.0]; CGContextSetLineWidth(currentContext, 1); CGFloat red[4] = {1.0f, 0.0f, 0.0f, 1.0f}; CGContextSetStrokeColor(currentContext, red); CGContextAddPath(currentContext, _myArcPath); CGContextStrokePath(currentContext); } Works fine. The arc is displayed with a red stroke on the screen. In KIP_Character, which has been passed it's relevant arc, I am using this code but getting no results. - (void) traceArc { CGMutablePathRef myArc = _myArc.myArcPath; // Set up path movement CAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"]; pathAnimation.calculationMode = kCAAnimationPaced; pathAnimation.fillMode = kCAFillModeForwards; pathAnimation.removedOnCompletion = NO; pathAnimation.path = myArc; CGPathRelease(myArc); [self.layer addAnimation:pathAnimation forKey:@"savingAnimation"]; } Any help here would be appreciated.

    Read the article

  • OpenGL - have object follow mouse

    - by kevin james
    I want to have an object follow my mouse around on the screen in OpenGL. (I am also using GLEW, GLFW, and GLM). The best idea I've come up with is: Get the coordinates within the window with glfwGetCursorPos. The window was created with window = glfwCreateWindow( 1024, 768, "Test", NULL, NULL); and the code to get coordinates is double xpos, ypos; glfwGetCursorPos(window, &xpos, &ypos); Next, I use GLM's unProject to get the coordinates in "object space" glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f); glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f); glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport); There are two potential problems I can already see. The viewport is fine, as the initial x, y coordinates of the lower left are indeed 0,0, and it is indeed a 1024*768 window. However, the position vector I create doesn't seem right. The Z coordinate should probably not be zero. However, glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the 3rd dimension of the window coordinates even means (since computer screens are 2D). Then, I am not sure if I am using unProject correctly. Assume the View, Model, Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object coordinates? I think it does, but the documentation is not clear. Finally, to each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since my position vector that is being unprojected is likely wrong, this is not giving good results; the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unProject always has a value of 0.0 for z. Edit: The (incorrectly) unprojected x-values range from about -0.552 to 0.552, and the y-values from about -0.411 to 0.411.
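    For what it's worth, a common way to fill in the missing z is to read it back from the depth buffer at the cursor position, and to flip y (GLFW measures window coordinates from the top-left, while OpenGL's viewport origin is the bottom-left). The sketch below is only an illustration: it reuses the window, View, Model and Projection from the question and assumes the same 1024x768 window size.

        // Hedged sketch: unproject the cursor using the depth buffer to supply z.
        // Assumes GLEW/GLFW are already initialized as in the question's setup.
        #include <GL/glew.h>
        #include <GLFW/glfw3.h>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        glm::vec3 cursorToObjectSpace(GLFWwindow* window,
                                      const glm::mat4& Model,
                                      const glm::mat4& View,
                                      const glm::mat4& Projection)
        {
            double xpos, ypos;
            glfwGetCursorPos(window, &xpos, &ypos);

            // GLFW's y runs from the top of the window; OpenGL expects it from the bottom.
            float winX = static_cast<float>(xpos);
            float winY = 768.0f - static_cast<float>(ypos);

            // Read the depth under the cursor (0.0 = near plane, 1.0 = far plane).
            float winZ = 0.0f;
            glReadPixels(static_cast<GLint>(winX), static_cast<GLint>(winY),
                         1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

            glm::vec4 viewport(0.0f, 0.0f, 1024.0f, 768.0f);
            glm::vec3 winPos(winX, winY, winZ);

            // unProject takes the modelview matrix (View * Model) and returns object-space coordinates.
            return glm::unProject(winPos, View * Model, Projection, viewport);
        }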

    Read the article

  • How to Disable the New Geolocation Feature in Google Chrome

    - by Asian Angel
    The latest release of Google Chrome has geolocation enabled by default, and if you are worried about privacy or just don’t want websites to prompt you for your location, we’ve got the quick details on how to turn it off. Readers should note that the new Geolocation feature doesn’t give out your details by default, so don’t panic. It’s also only active, at the time of this writing, in the Dev channel builds of Chrome—so if you are using the regular stable build this feature won’t arrive for a while anyway. Note: If you’re a Firefox user, be sure to check out our guide to disabling geolocation in Firefox 3.x. What’s this Geolocation Feature About? Geolocation is a way for your browser to tell a website about your physical location, so you can get results tailored to where you actually are—for example, if you visited Google Maps it can ask you for your location to give you an accurate picture of where you are. To use this feature in Google Maps, you would click on the small white icon to activate the feature. As soon as you have clicked on the small white icon, a thin green toolbar will appear at the top of the webpage, asking to Allow or Deny. How to Turn Chrome’s Geolocation Off If you want to turn geolocation off you will need to open the “Chrome Options Window”, navigate to the third tab, and click on the “Content settings… ” button. When the “Content Settings Window” opens go to the “Location Tab” and select “Do not allow any site to track my physical location”. Once that is done close out the “Content Settings & Chrome Options Windows”. When you go back to Google Maps and try using the small white icon again this is the message that you will see at the top of the page. Now that is much better! If you are unhappy with geolocation being enabled by default in the latest Dev Channel release then this will help get the problem sorted out nicely.

    Read the article

  • System Variables, Stored Procedures or Functions for Meta Data

    - by BuckWoody
    Whenever you want to know something about SQL Server’s configuration, whether that’s the Instance itself or a database, you have a few options. If you want to know “dynamic” data, such as how much memory or CPU is consumed or what a particular query is doing, you should be using the Dynamic Management Views (DMVs) that you can read about here: http://msdn.microsoft.com/en-us/library/ms188754.aspx  But if you’re looking for how much memory is installed on the server, the version of the Instance, the drive letters of the backups and so on, you have other choices. The first of these are system variables. You access these with a SELECT statement, and they are useful when you need a discrete value for use, say in another query or to put into a table. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms173823.aspx You also have a few stored procedures you can use. These often bring back a lot more data, pre-formatted for the screen. You access these with the EXECUTE syntax. It is a bit more difficult to take the data they return and get a single value or place the results in another table, but it is possible. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms187961.aspx Yet another option is to use a system function, which you access with a SELECT statement, which also brings back a discrete value that you can use in a test or to place in another table. You can read about those here: http://msdn.microsoft.com/en-us/library/ms187812.aspx  By the way, many of these constructs simply query from tables in the master or msdb databases for the Instance or the system tables in a user database. You can get much of the information there as well, and there are even system views in each database to show you the meta-data dealing with structure – more on that here: http://msdn.microsoft.com/en-us/library/ms186778.aspx  Some of these choices are the only way to get at a certain piece of data. But others overlap – you can use one or the other, they both come back with the same data. So, like many Microsoft products, you have multiple ways to do the same thing. And that’s OK – just research what each is used for and how it’s intended to be used, and you’ll be able to select (pun intended) the right choice.
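    As a quick illustration of the options described above (my own sketch, not from the original post), the statements below show a system variable, a system stored procedure, a system function and a catalog view side by side; the database name is just a placeholder.

        -- System variable: returns a discrete value you can reuse in a query.
        SELECT @@VERSION AS InstanceVersion;

        -- System stored procedure: returns pre-formatted result sets.
        EXEC sp_helpdb 'master';

        -- System function: also returns a discrete value, handy in tests or inserts.
        SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
               SERVERPROPERTY('Edition') AS Edition;

        -- Catalog view: part of the metadata these constructs ultimately read from.
        SELECT name, create_date FROM sys.databases;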

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    “Big Data” offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world’s most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating 80% of deployments are integrated with MySQL. The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organisation to analysis and visualisation / decision. The Guide details each of these stages and the technologies supporting them: Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data, without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight, before data is loaded into Hadoop. In addition, sensitive data can be pre-processed; for example healthcare or financial services records can be anonymized before transfer to Hadoop. Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS. Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform. Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop, where they inform real-time operational processes or provide source data for BI analytics tools. So how are companies taking advantage of this today? As an example, on-line retailers can use big data from their web properties to better understand site visitors’ activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers and to deliver highly targeted offers. Of course, it is not just on the web that big data can make a difference. Every business activity can benefit, with other common use cases including: - Sentiment analysis; - Marketing campaign analysis; - Customer churn modeling; - Fraud detection; - Research and Development; - Risk Modeling; - And more. As the guide discusses, Big Data is promising a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable. Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing about how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.
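    To make the Organize and Decide stages a little more concrete, here is a hedged sketch of the kind of Sqoop commands involved; the connection string, credentials, table names and HDFS paths are placeholders of my own, not values from the guide.

        # Organize: copy a MySQL table into HDFS (placeholder connection details).
        sqoop import \
          --connect jdbc:mysql://db.example.com/sales \
          --username etl_user --password-file /user/etl/.mysql_pass \
          --table web_sessions \
          --target-dir /data/raw/web_sessions

        # Decide: load the results of the Hadoop analysis back into MySQL.
        sqoop export \
          --connect jdbc:mysql://db.example.com/sales \
          --username etl_user --password-file /user/etl/.mysql_pass \
          --table customer_offers \
          --export-dir /data/output/customer_offers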

    Read the article

  • Fast programmatic compare of "timetable" data

    - by Brendan Green
    Consider train timetable data, where each service (or "run") has a data structure such as: public class TimeTable { public int Id {get;set;} public List<Run> Runs {get;set;} } public class Run { public List<Stop> Stops {get;set;} public int RunId {get;set;} } public class Stop { public int StationId {get;set;} public TimeSpan? StopTime {get;set;} public bool IsStop {get;set;} } We have a list of runs that operate against a particular line (the TimeTable class). Further, whilst we have a set collection of stations that are on a line, not all runs stop at all stations (that is, IsStop would be false, and StopTime would be null). Now, imagine that we have received the initial timetable, processed it, and loaded it into the above data structure. Once the initial load is complete, it is persisted into a database - the data structure is used only to load the timetable from its source and to persist it to the database. We are now receiving an updated timetable. The updated timetable may or may not have any changes to it - we don't know and are not told whether any changes are present. What I would like to do is perform a compare for each run in an efficient manner. I don't want to simply replace each run. Instead, I want to have a background task that runs periodically, downloads the updated timetable dataset, and then compares it to the current timetable. If differences are found, some action (not relevant to the question) will take place. I was initially thinking of some sort of checksum process, where I could, for example, load both runs (that is, the one from the new timetable received and the one that has been persisted to the database) into the data structure and then add up all the hour components of the StopTime and all the minute components of the StopTime, and compare the results (i.e. both the sum of hours and the sum of minutes would have to match, with differences introduced if a stop time is changed, a stop is deleted or a new stop is added). Would that be a valid way to check for differences, or is there a better way to approach this problem? I can already see a problem with it: for example, one stop changed to be 2 minutes earlier and another changed to be 2 minutes later would produce a net change of zero. Or am I overthinking this, and would it just be simpler to brute-force check all stops to ensure that the updated run stops at the same stations, and that each stop is at the same time?
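    For reference, a brute-force stop-by-stop comparison is usually fast for timetable-sized data. The sketch below is my own illustration, reusing the classes defined above; it compares two runs directly rather than summing hours and minutes, which avoids the cancelling-changes problem described in the question.

        // Hedged sketch: brute-force comparison of two runs, reusing the Run/Stop
        // classes from the question. Null StopTime values compare correctly because
        // TimeSpan? uses lifted equality.
        public static class RunComparer
        {
            public static bool RunsMatch(Run current, Run updated)
            {
                if (current.RunId != updated.RunId) return false;
                if (current.Stops.Count != updated.Stops.Count) return false;  // stop added or removed

                for (int i = 0; i < current.Stops.Count; i++)
                {
                    Stop a = current.Stops[i];
                    Stop b = updated.Stops[i];

                    if (a.StationId != b.StationId) return false;  // station changed
                    if (a.IsStop != b.IsStop) return false;        // stopping pattern changed
                    if (a.StopTime != b.StopTime) return false;    // time changed
                }
                return true;
            }
        }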

    Read the article

  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module to add some extra debug output. The kernel module in question is mvsas and is not necessary to boot. For that reason I don't see any need to update any initrd images. I have read a lot of information (as shown below) and find the setup and build process confusing. I need two recipes: how to set up/configure the build environment once, and the steps to follow after editing any source file of this kernel module (.c and .h) to turn that edit into a new kernel module (.ko). The sources that have been used are: build one kernel module - Google search http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/ http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module http://www.pixelbeat.org/docs/rebuild_kernel_module.html How do I build a single in-tree kernel module? http://ubuntuforums.org/showthread.php?t=1153067 http://ubuntuforums.org/showthread.php?t=2112166 http://ubuntuforums.org/showthread.php?t=1115593 build one kernel module ubuntu - Google search 'make +single +kernel +module' - Ask Ubuntu 'make +kernel +module' - Ask Ubuntu My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed '"Invalid module format"' - Ask Ubuntu Driver installation: compiling source code for newer kernel Modprobe: 'Invalid nodule format', yet works after insmod "Symbol version dump" "is missing" - Google search http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server? "no symbol version for module_layout" when trying to load usbhid.ko Broken links inside Linux header file folder 'make modules_install' - Ask Ubuntu 'modules_install' - Ask Ubuntu Empty build directory in custom compiled kernel Not able to see pr_info output In which directory are the kernel source files and how can I recompile it? How can I compile and install that patched libata-eh.c file? 'modules_install +depmod' - Ask Ubuntu modules_install depmod - Google search "make modules_install" - Google search http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process https://wiki.ubuntu.com/KernelCustomBuild http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html "make prepare" - Google search "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search http://ubuntuforums.org/showthread.php?t=1963515 ubuntu "make prepare" version - Google search http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source https://help.ubuntu.com/community/Kernel/Compile How do I compile a kernel module? How to add a custom driver to my kernel? Compile and loading kernel module without compiling the kernel
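    Not part of the original question, but for context, the usual out-of-tree rebuild of a single in-tree module on Ubuntu looks roughly like the sketch below. The module directory (drivers/scsi/mvsas), the source-fetching step and the load commands are assumptions, and the exact steps can vary between kernel versions, so treat this only as a starting point.

        # Hedged sketch: rebuild only the mvsas module against the running kernel.
        sudo apt-get install linux-headers-$(uname -r) build-essential

        # Fetch the matching kernel source (requires deb-src entries in sources.list).
        apt-get source linux-image-$(uname -r)
        cd linux-*/

        # Edit drivers/scsi/mvsas/*.c as needed, then build just that directory
        # against the headers of the running kernel.
        make -C /lib/modules/$(uname -r)/build M=$PWD/drivers/scsi/mvsas modules

        # Load the freshly built module for testing (the old module must be unloaded first).
        sudo rmmod mvsas || true
        sudo insmod drivers/scsi/mvsas/mvsas.ko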

    Read the article

  • Calling Web Services using ADF 11g

    - by James Taylor
    One of the benefits of ADF is the fact that it can use multiple data sources. With SOA playing a big part in today’s IT landscape, applications need to be able to utilise this SOA framework to leverage functionality from multiple systems and provide a composite application. ADF provides functionality to expose web services via ADF Business Components, so if you know how to use Business Components for a database, configuring ADF for web services is much the same. In this example I use an OSB web service that gets a customer. Create a new Fusion Web Application (ADF) application and click OK. Provide an application name, GetCustomerADF, and click Next. From the Project Technologies list move Web Services into the Selected box. Accept the defaults and click Finish. Right-click the Model project and select New. In the Gallery select Web Services –> Web Service Data Control, then click OK. Provide a name, GetCustomerDC, and give the URL endpoint for the web service, then click Next. Select the web service operation you want to use for the ADF application; in my example the web service only has one operation. Click Finish. Save your work, File –> Save. The data control has now been created; the next steps create the UI components. In your application created in step 1 find the ViewController project, right-click and choose New. In the Gallery select JSF –> JSF Page. Provide a name for the JSF page, GetCustomer, and ensure that the ‘Create as XML Document (*.jspx)’ check box is checked. I have selected the page template Oracle Three Column Layout, but you can create a layout of your choice. I only want 2 columns, so I delete the last column by right-clicking the right-hand panel and selecting Delete. Drag the fields you require from the web service data control to the left panel. In my example I only require the Customer ID. When you drag to the panel, select Texts –> ADF Input Text w/Label. In this example I want to search for a customer based on the ID, so once I select the ID I want to execute the request. To do this I need a button. Drag the operation object under the fields created in step 15 and select Methods –> ADF Button. You now need to provide the mappings: choose the ‘Show EL Expression Builder’ option, navigate to the bindings, ADFBindings –> bindings –> parametersIterator –> currentRow, and click OK. Drag and drop the return information. I just want the results shown in a form, and I want to show all fields. Now it is time to test: right-click the jspx page created in steps 11 – 21 and select Run. A browser should start; enter valid values and test.

    Read the article

  • Is commented out code really always bad?

    - by nikie
    Practically every text on code quality I've read agrees that commented out code is a bad thing. The usual example is that someone changed a line of code and left the old line there as a comment, apparently to confuse people who read the code later on. Of course, that's a bad thing. But I often find myself leaving commented out code in another situation: I write a computational-geometry or image processing algorithm. To understand this kind of code, and to find potential bugs in it, it's often very helpful to display intermediate results (e.g. draw a set of points to the screen or save a bitmap file). Looking at these values in the debugger usually means looking at a wall of numbers (coordinates, raw pixel values). Not very helpful. Writing a debugger visualizer every time would be overkill. I don't want to leave the visualization code in the final product (it hurts performance, and usually just confuses the end user), but I don't want to lose it, either. In C++, I can use #ifdef to conditionally compile that code, but I don't see much difference between this: /* // Debug Visualization: draw set of found interest points for (int i=0; i<count; i++) DrawBox(pts[i].X, pts[i].Y, 5,5); */ and this: #ifdef DEBUG_VISUALIZATION_DRAW_INTEREST_POINTS for (int i=0; i<count; i++) DrawBox(pts[i].X, pts[i].Y, 5,5); #endif So, most of the time, I just leave the visualization code commented out, with a comment saying what is being visualized. When I read the code a year later, I'm usually happy I can just uncomment the visualization code and literally "see what's going on". Should I feel bad about that? Why? Is there a superior solution? Update: S. Lott asks in a comment Are you somehow "over-generalizing" all commented code to include debugging as well as senseless, obsolete code? Why are you making that overly-generalized conclusion? I recently read Robert Glass' "Clean Code", which says: Few practices are as odious as commenting-out code. Don't do this! I've looked at the paragraph in the book again (p. 68); there's no qualification, no distinction made between different reasons for commenting out code. So I wondered if this rule is over-generalizing (or if I misunderstood the book) or if what I do is bad practice, for some reason I didn't know.
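    One pattern that sits between the two snippets above (my own sketch, not from the post) is to hide the guard behind a small macro, so the visualization call stays on one readable line and compiles away in release builds; DrawBox and pts are reused from the question.

        // Hedged sketch: a debug-visualization macro that expands to nothing unless
        // DEBUG_VISUALIZATION is defined at build time.
        #ifdef DEBUG_VISUALIZATION
        #define DEBUG_VIS(call) do { call; } while (0)
        #else
        #define DEBUG_VIS(call) do { } while (0)
        #endif

        // Usage inside the algorithm: reads like a comment, but stays compilable,
        // so it cannot silently rot the way commented-out code can.
        for (int i = 0; i < count; i++)
            DEBUG_VIS(DrawBox(pts[i].X, pts[i].Y, 5, 5));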

    Read the article

  • How valuable are you to your organization?

    - by Lance Shaw
    I don't know about you but I find it easy to get bogged down with the daily list of tasks and deliverables. We all have lots to do and it all seems to be due tomorrow. If you are reading this blog, then your to-do list is almost certainly filled with tasks related to the management, processing and publishing of information. As we get mired in the daily routine of making sure that the content management needs of the organization are met, we can easily lose sight of the value that we bring. After all, if information and content are the lifeblood of our organizations, then surely maintaining the healthy flow of that information has real value. But how can you measure that value and bring it forward on your résumé or your list of achievements in time for your next performance review? The AIIM organization has spent a lot of time recently researching the value of certification for "information professionals". When it comes to enterprise content management (ECM) there are many areas of specialization including records management, content archivist, digital asset manager, content librarian and more. Specialization can clearly drive up your value but it can also lock you into a narrow niche area of focus. AIIM has found that what companies also need is someone who can apply their knowledge of how information is managed within the operational scope of the business in order to drive real, measurable strategic value. When you can showcase the value of a broader, business-wide mindset to your management, you have more opportunity to make professional progress and drive real growth where it counts: your paycheck. We here on the Oracle WebCenter team partnered with AIIM on the research they performed around the value of an information professional certification program. In a webinar this week, Doug Miles of AIIM and I will be talking about the results of that recent survey and what it is going to mean in the future to be recognized as a "Certified Information Professional" (CIP). Oracle sponsored this research to help individuals and companies understand the value of enterprise content management and what it means across the entire organization. I hope you will join us. If any of us were stopped in the street and asked about it, I bet most of us would think of ourselves as an "Information Professional". Now we have a way to actually prove it! There's only one downside that I can see... you will have to get your business cards updated to include the "CIP" acronym after your name. I think you will agree that is a price worth paying!

    Read the article

< Previous Page | 380 381 382 383 384 385 386 387 388 389 390 391  | Next Page >