Search Results

Search found 23853 results on 955 pages for 'c standard library'.

Page 155 of 955

  • Ruby 1.8.7 compatibility

    - by Sebastian
    I get an exception after switching to Ruby 1.8.7 on Snow Leopard:

      ArgumentError: wrong number of arguments (1 for 0)
        /Library/Ruby/Gems/1.8/gems/activerecord-1.15.5/lib/active_record/connection_adapters/abstract/quoting.rb:27:in 'to_s'
        /Library/Ruby/Gems/1.8/gems/activerecord-1.15.5/lib/active_record/connection_adapters/abstract/quoting.rb:27:in 'quote'
        /Library/Ruby/Gems/1.8/gems/activerecord-1.15.5/lib/active_record/connection_adapters/mysql_adapter.rb:190:in 'quote'
        /Library/Ruby/Gems/1.8/gems/activerecord-1.15.5/lib/active_record/base.rb:2042:in 'quote_value'
        /Library/Ruby/Gems/1.8/gems/activerecord-1.15.5/lib/active_record/base.rb:2034:in 'attributes_with_quotes'

    The application uses Rails 1.2.5, and there is no chance of updating Rails in this app. Does anybody have a solution?

    Read the article

  • mysql gem for Snow Leopard

    - by Will
    I had trouble with the gem at first, but got it to work when I installed the 64-bit MySQL and reinstalled the gem with arch flags, so it works in Rails. The error I used to get was uninitialized constant MysqlCompat::MysqlRes, but that is now gone :) However, in Xcode, when I run a RubyCocoa project I still get the old error of uninitialized constant MysqlCompat::MysqlRes. Does anyone know why this may be? Is it because gdb is 64-bit? How can it work in Rails but not in RubyCocoa? A little debugging shows that it fails to load mysql_api.bundle:

      /Library/Ruby/Gems/1.8/gems/mysql-2.8.1/lib/mysql_api.bundle: dlopen(/Library/Ruby/Gems/1.8/gems/mysql-2.8.1/lib/mysql_api.bundle, 9): no suitable image found. Did find: (LoadError)
        /Library/Ruby/Gems/1.8/gems/mysql-2.8.1/lib/mysql_api.bundle: mach-o, but wrong architecture - /Library/Ruby/Gems/1.8/gems/mysql-2.8.1/lib/mysql_api.bundle
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'

    Read the article

  • Java and gstreamer-java initialisation error

    - by Mark
    I am building a small app which will play streaming audio from the internet in Java (mainly internet radio stations). I have decided to use the gstreamer-java library for the sound, which uses JNA. I would like to include a check in the code to see whether the gstreamer library has been initialised. When I leave the "Gst.init()" call out (to mimic the library not being initialised correctly), the application throws out the following messages:

      (process:21888): GLib-GObject-CRITICAL **: /build/buildd/glib2.0-2.22.3/gobject/gtype.c:2458: initialization assertion failed, use IA__g_type_init() prior to this function
      (process:21888): GLib-CRITICAL **: g_once_init_leave: assertion `initialization_value != 0' failed

    The app calls the gstreamer-java library; the error messages appear, but the thread continues to run, hogging the CPU. Is there any way to catch the error or to add a check to prevent it from happening? An alternative would be to put the "Gst.init()" call in the main class, but I am not sure whether this would always guarantee that the gstreamer library is initialised.
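    Not part of the original question: one common way to get that guarantee is to route every GStreamer-touching entry point through a single guard that performs the initialisation exactly once, rather than trying to catch the GLib assertions after the fact. The sketch below is a minimal illustration only; it assumes the org.gstreamer.Gst class from gstreamer-java, and the GstBootstrap wrapper name is hypothetical.

      import org.gstreamer.Gst;

      // Hypothetical guard class: funnels all gstreamer-java use through one
      // initialisation point so Gst.init() has run before any pipeline exists.
      public final class GstBootstrap {

          private static boolean initialised = false;

          private GstBootstrap() {
          }

          // Idempotent: safe to call from every code path that touches GStreamer.
          public static synchronized void ensureInitialised() {
              if (!initialised) {
                  Gst.init();          // the call the question says must happen first
                  initialised = true;
              }
          }

          public static synchronized boolean isInitialised() {
              return initialised;
          }
      }

    Callers would then invoke GstBootstrap.ensureInitialised() before constructing any pipeline or element, which sidesteps the need to detect a missing Gst.init() after the GLib warnings have already been printed.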

    Read the article

  • Django + WebKit = Broken pipe

    - by Saosin
    I'm running the Django 1.2 development server and I get these "Broken pipe" error messages whenever I load a page from it with Chrome or Safari. My co-worker is getting the error as well when he loads a page from his dev server. We don't have these errors when using Opera or Firefox.

      Traceback (most recent call last):
        File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/django/core/servers/basehttp.py", line 281, in run
          self.finish_response()
        File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/django/core/servers/basehttp.py", line 321, in finish_response
          self.write(data)
        File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/django/core/servers/basehttp.py", line 417, in write
          self._write(data)
        File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 300, in write
          self.flush()
        File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 286, in flush
          self._sock.sendall(buffer)
      error: [Errno 32] Broken pipe

    Can anyone help me out? I'm going crazy over this!

    Read the article

  • TextMate Cucumber bundle issues - 'Run Feature' producing errors

    - by Evolve
    From a Cucumber feature file, when I go to 'Run features' I'm getting the error below in the popup box that appears. How do I fix this?

      /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- /Users/evolve/Projects/i9/Tornelo/.bundle/environment (LoadError)
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
        from /Users/evolve/Library/Application Support/TextMate/Bundles/Cucumber.tmbundle/Support/lib/cucumber/mate/../mate.rb:10
        from /Users/evolve/Library/Application Support/TextMate/Bundles/Cucumber.tmbundle/Support/lib/cucumber/mate/feature_helper.rb:1:in `require'
        from /Users/evolve/Library/Application Support/TextMate/Bundles/Cucumber.tmbundle/Support/lib/cucumber/mate/feature_helper.rb:1
        from /tmp/cucumber-906.rb:2:in `require'
        from /tmp/cucumber-906.rb:2

    Read the article

  • libtool failed with exit code 1 on Xcode 4.3

    - by Doz
    I have Xcode 4.3 and I am getting this frustrating xml-lib-related error. I have a feeling it's because Xcode 4.3 no longer uses the /Developer folder but instead /Applications/XCode.app/... The error message is below:

      Libtool /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator/RWEngines.framework/Versions/A/RWEngines normal i386
        cd /Users/dkatz/Sites/xCode/RWA/RWEngines
        setenv MACOSX_DEPLOYMENT_TARGET 10.6
        setenv PATH "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
        /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool -static -arch_only i386 -syslibroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator5.0.sdk -L/Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator -filelist /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Intermediates/RWEngines.build/Release-iphonesimulator/RWEngines.build/Objects-normal/i386/RWEngines.LinkFileList -ObjC -Xlinker -no_implicit_dylibs -D__IPHONE_OS_VERSION_MIN_REQUIRED=50000 -framework UIKit /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator/libCorePlot-CocoaTouch.a -framework SenTestingKit -framework QuartzCore -framework Foundation -framework RWCommon -o /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator/RWEngines.framework/Versions/A/RWEngines

    And the actual error:

      /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool failed with exit code 1

    Thanks guys!

    Read the article

  • How to get the repository for core-plot

    - by Omar
    I am not able to get the repository for core-plot. I am typing this in the terminal:

      hg clone https://core-plot.googlecode.com/hg/ core-plot

    and this is what I get:

      Traceback (most recent call last):
        File "/usr/local/bin/hg", line 25, in <module>
          mercurial.util.set_binary(fp)
        File "/Library/Python/2.5/site-packages/mercurial/demandimport.py", line 75, in __getattribute__
          self._load()
        File "/Library/Python/2.5/site-packages/mercurial/demandimport.py", line 47, in _load
          mod = _origimport(head, globals, locals)
        File "/Library/Python/2.5/site-packages/mercurial/util.py", line 93, in <module>
          _encoding = locale.getlocale()[1]
        File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/locale.py", line 460, in getlocale
          return _parse_localename(localename)
        File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/locale.py", line 373, in _parse_localename
          raise ValueError, 'unknown locale: %s' % localename
      ValueError: unknown locale: UTF-8

    I can't seem to get it to work. Please give me some guidance on how to clone the repository.

    Read the article

  • Test for external undefined references in Linux

    - by Charles
    Is there a built-in Linux utility that I can use to test a newly compiled shared library for external undefined references? GCC seems to be intelligent enough to check for undefined symbols in my own binary, but if the symbol is a reference to another library, GCC does not check it at link time. Instead I only get the message when I try to link to my new library from another program. It seems a little silly to get undefined reference messages for a library when I am compiling a different project, so I want to know if I can do a check on all references, internal and external, when I build the library rather than when I link to it. Example error:

      make -C UnitTests debug
      make[1]: Entering directory `~/projects/Foo/UnitTests`
      g++ [ tons of objects ] -L../libbar/bin -lbar -o UnitTests
      libbar.so: undefined reference to `DoSomethingFromAnotherLibrary`
      collect2: ld returned 1 exit status
      make[1]: *** [~/projects/Foo/UnitTests] Error 1

    Read the article

  • Nested svn repositories

    - by singles
    I got a "Project A" in repository. But in that project I'm using a library, which is hosted on Google Code. There is my question: is there any way, to have that library files "hooked" to Google Code SVN, and simultaneously my project in my repo (it's parent to that library), so I can commit library files into my repository when I decide, that outer project revision is ok? I've tried to do checkout in the library folder, files were downloaded from Google's Code repository. But I that case wasn't able to add them to my repository - they weren't visible in "Add" window.

    Read the article

  • How to tell the MinGW linker not to export all symbols?

    - by James R.
    Hello, I'm building a Windows dynamic library using the MinGW toolchain. To build this library I'm statically linking against two other libraries which offer an API, and I have a .def file where I wrote the only symbol I want to be exported from my library. The problem is that GCC is exporting all of the symbols, including the ones from the libraries I'm linking against. Is there any way to tell the linker to export only the symbols in the .def file? I know there is the option --export-all-symbols, but there seems to be no opposite of it. Right now the last line of the build script has this structure:

      g++ -shared CXXFLAGS DEFINES INCLUDES -o library.dll library.cpp DEF_FILE \
          OBJECT_FILES LIBS -Wl,--enable-stdcall-fixup

    EDIT: The linker docs say that --export-all-symbols is the default behaviour and that it is disabled if you provide a .def file without passing that option explicitly - except that it isn't: the symbols in the third-party libs are being exported anyway.

    EDIT: Adding the option --exclude-libs LIBS doesn't keep their symbols from being exported either.

    Read the article

  • Unable to open executable - Xcode

    - by Filipe Mota
    I'm getting this error... any idea how to solve it?

      GenerateDSYMFile /Users/fmota/Library/Developer/Xcode/DerivedData/PBTest-gvudadeakgzklbekugyiqyfyprlt/Build/Products/Debug-iphonesimulator/PBTest.app.dSYM /Users/fmota/Library/Developer/Xcode/DerivedData/PBTest-gvudadeakgzklbekugyiqyfyprlt/Build/Products/Debug-iphonesimulator/PBTest.app/PBTest
        cd /Users/fmota/Documents/Developer/Protobuf/PBTest
        setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
        /Developer/usr/bin/dsymutil /Users/fmota/Library/Developer/Xcode/DerivedData/PBTest-gvudadeakgzklbekugyiqyfyprlt/Build/Products/Debug-iphonesimulator/PBTest.app/PBTest -o /Users/fmota/Library/Developer/Xcode/DerivedData/PBTest-gvudadeakgzklbekugyiqyfyprlt/Build/Products/Debug-iphonesimulator/PBTest.app.dSYM

      error: unable to open executable '/Users/fmota/Library/Developer/Xcode/DerivedData/PBTest-gvudadeakgzklbekugyiqyfyprlt/Build/Products/Debug-iphonesimulator/PBTest.app/PBTest'

    Read the article

  • Using Maven to build JAR from source in Subversion trunk

    - by Anonymouse
    There's a Java library that I would like to use in my project. My project uses Maven to pull in dependencies, and it works great for everything except this one library. The problem is, this library never has releases. The author maintains the source in a Subversion repository and only makes changes in trunk. Is there a way I can tell Maven to:

      1. Update (or check out) the library's source tree from Subversion
      2. Build it according to its POM
      3. Use the resulting jar as a dependency for this project
      4. Do this regularly (possibly at each build)

    For bonus points, mark which Subversion revision of that library I want to use. Thanks!

    Read the article

  • Partial coverage of a return statement in C++/CLI

    - by brickner
    I have C++/CLI code and I'm using Visual Studio 2008 Team Suite Code Coverage. The code header:

      // Library.h
      #pragma once
      #include <string>

      using namespace System;

      namespace Library
      {
          public ref class MyClass
          {
          public:
              static void MyFoo();
              static std::string Foo();
          };
      }

    The code implementation:

      #include "Library.h"

      using namespace Library;
      using namespace System;

      void MyClass::MyFoo()
      {
          Foo();
      }

      std::string MyClass::Foo()
      {
          return std::string();
      }

    I have a C# unit test that calls MyClass.MyFoo():

      [TestMethod]
      public void TestMethod1()
      {
          Library.MyClass.MyFoo();
      }

    For some reason, I don't get full code coverage for MyClass. The Foo() method has 3 uncovered blocks and 5 covered blocks. The closing curly brackets (}) are marked in orange - partially covered. I have no idea why it is partially covered instead of fully covered, and this is my question.

    Read the article

  • Longest substring in a large set of strings

    - by user1516492
    I have a huge fixed library of text strings and a frequently changing input string s. I need to find, in minimal time, the longest matching substring from any string in the library to s, starting from the beginning of s. In a perfect world, I would also return the next longest match from the library, then the next best, and so on. This is not the longest common substring problem - I'm not looking for the longest common substring across all the strings in the library... I just need the pairwise best substring between s and each string in the vast library, as fast as possible.
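    Not part of the original question: a minimal sketch of one way to attack this, assuming the goal is the longest prefix of s that occurs as a substring of at least one library string. It builds a naive trie over every suffix of every library string once up front, then answers each query by walking the trie along s; for a truly huge library, a suffix automaton or Aho-Corasick automaton would be the more memory-efficient choice, and the class and method names here are hypothetical.

      import java.util.HashMap;
      import java.util.Map;

      // Naive suffix trie over the fixed library, queried with the changing string s.
      final class LibrarySuffixTrie {

          private static final class Node {
              final Map<Character, Node> next = new HashMap<>();
          }

          private final Node root = new Node();

          // Insert every suffix of a library string; called once per string at build time.
          void addLibraryString(String text) {
              for (int start = 0; start < text.length(); start++) {
                  Node node = root;
                  for (int i = start; i < text.length(); i++) {
                      node = node.next.computeIfAbsent(text.charAt(i), c -> new Node());
                  }
              }
          }

          // Length of the longest prefix of s that is a substring of some library string.
          int longestMatchFromStart(String s) {
              Node node = root;
              int length = 0;
              while (length < s.length()) {
                  node = node.next.get(s.charAt(length));
                  if (node == null) {
                      break;
                  }
                  length++;
              }
              return length;
          }
      }

    Building the trie is done once for the fixed library; each query for a new s is then linear in the length of the match. Returning the "next longest" matches the question asks about would require per-node bookkeeping of which library strings contributed each path, which this sketch omits.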

    Read the article

  • Apply Skins to Add Some Flair to Windows Media Player 12

    - by DigitalGeekery
    Tired of the same look and feel of Windows Media Player in Windows 7? We'll show you how to inject new life into your media experience by applying skins in WMP 12.

    Adding Skins

    In Library view, click on View from the menu and select Skin Chooser. By default, WMP 12 comes with only a couple of modest skins. When you select a skin from the left pane, a preview is displayed on the right. To apply one of the skins, simply select it from the pane on the left and click Apply Skin. You can also switch to the currently selected skin in the Skin Chooser by selecting Skin from the View menu, or by pressing Ctrl + 2. Media Player will open in Now Playing mode. Click on the Switch to Library button at the top left to return to Library view.

    OK, so the included skins are a little boring. You can find additional skins by selecting Tools > Download > Skins, or by clicking on More Skins from within the Skin Chooser. You will be taken to the Microsoft website, where you can choose from dozens of skins to download and install. Select a skin you'd like to try and click the link to download. If prompted with a warning message about files containing scripts that access your library, click Yes. Note: these warning boxes may look a bit different depending on your browser; we are using Chrome for this example. Click on View Now, and your new skin will be on display. To get back to Library mode, find and click the Return to Full Mode button. Some skins may launch video in a separate window.

    If you want to delete one of the skins, select it from the list within the Skin Chooser and click the red "X." You can also press the Delete key on your keyboard. Then click Yes to confirm.

    Conclusion

    Using skins is a quick and easy way to add some style to Windows Media Player, and switching back and forth between skins is a breeze. Regardless of your interests, you are sure to find a skin that fits your tastes. You may find WMP skins on other sites, but sticking with Microsoft's website will ensure maximum compatibility.

    Read the article

  • InfoPath 2010 Form Design and Web Part Deployment

    - by JKenderdine
    In January I had the pleasure of speaking at SharePoint Saturday Virginia Beach. I presented a session on InfoPath 2010 form design which included some of the basics of form design, a description of some of the new options with InfoPath 2010 and SharePoint 2010, and other integration possibilities. Included below is the information presented as well as the solution used to create the demo.

    The first thing you need to understand is the difference between an InfoPath list form and a form library form:

    - SharePoint list forms: store data directly in a SharePoint list. Each control (e.g. a text box) in the form is bound to a column in the list. SharePoint list forms are directly connected to the list, which means that you don't have to worry about setting up the publish and submit locations. You also do not have the option for back-end code.
    - Form library forms: store data in XML files in a SharePoint form library. This means they are more flexible and you can do more with them. For example, they can be configured to save drafts and submit to different locations. However, they are more complex to work with and require more decisions to be made during configuration. You do have the option of back-end code with these types of forms.

    Next steps: you need to create your File Architecture Plan. Plan the location for the saved template - both test and production (this is pretty much a given, but just in case: always make sure to have a test environment) - and plan the location of the published template.

    Then you need to document your Form Template Design Plan. Some questions to ask to gather your requirements:

    - What will the form be designed to do? Will it gather user information? Will it display data from a data source?
    - Do we need to show different views to different users? What do we base this on?
    - How will it be implemented for the users? Browser or client based form? Site collection content type (published through Central Admin) or form library (published directly to the form library)?

    So what are the requirements for this template? Business Card Request Form Template Design Plan:

    - Gather user information and requirements for the card
    - Pull in as much user information as possible, using data from the user profile web services as a data source
    - Show and hide fields as necessary for the requirements
    - Create multiple views - one for those submitting the form and another view for the executive assistants placing the orders
    - Browser-based form integrated into a SharePoint team site
    - Published directly to the form library

    The form was published through Central Administration and incorporated into the site as a content type. Utilizing the new InfoPath Web Part, the form is integrated into the page and users can complete the form directly from within that page. For now, if you are interested in the final form XSN, contact me using the Contact link above. I will post soon with the details on how the form was created and how it integrated the requirements detailed above.

    Read the article

  • Part 7: EBS Modifications and Flagged Files in R12

    - by volker.eckardt(at)oracle.com
    Let me, based on my previous blog, explain the procedure of flagged files a bit better and illustrate it with screenshots. Flagged files is a concept within the Oracle E-Business Suite (EBS) release 12, where you flag a standard deployment file - let's say a Forms file, a package or a Java class file. When you run the patch analysis, the list of flagged files is checked, and in case one of these files gets patched, the analysis report will tell you. Note: this functionality is also available in release 11, where it is implemented and known as "applcust.txt". You can flag as many files as you want, in whatever relationship they are with your customizations. In addition to the flag itself you can add a comment. You should use this comment to point to your customization reference (here XXAR_RPT_066 or XXAP_CUST_030).

    Consider the following two cases:

    - You have created your own report, based on a standard report. In this case you will flag the report file itself and the key views used. When a patch updates one of these files, you will be informed and can initiate a proper review and testing. (Example: the first line for ARXCTA.rdf.)
    - You have created an extensive personalization, and because it is business critical you would like to be informed if the page definition gets updated. In this case you register the PG.xml file as a flagged file. (Example: the second line below for CreateExtBankAcctPG.xml.)

    The menu path to register flagged files is the following: (R) System Administrator > (M) Oracle Applications Manager > Site Map > Maintenance > Register Flagged Files

    Your DBA should now run the patch analysis every time he is going to apply a new patch: (R) System Administrator > (M) Oracle Applications Manager > Patch Wizard > Task "Recommend/Analyze Patches"

    The screenshot above shows the impact summary. For this blog entry the number "2", titled "Flagged Files Changed", is our focus. When you click the "2" you will get a screen similar to the first one in this blog, showing you exactly which files will get patched if you continue and apply this patch in this environment right now. Note: it is also shown that just 20% of all patch files will get applied. This situation might be different if your environments are on a different patch level, and the customization impact might then certainly be different as well.

    The flagging step can be done directly in the Oracle Applications Manager; our developers are responsible for it. To transport such a flag and comment we use an FNDLOAD script. It is suggested to put the flagged files data file directly into your CEMLI patch, so that the flagged files registration is executed at the same time the patch gets applied.

    Process steps:

    Developer:
    - Builds the CEMLI
    - Reviews the code and identifies the key standard objects referenced
    - Determines the standard object files and flags them
    - Creates the FNDLOAD file and adds it to the CEMLI patch

    DBA:
    - Executes the patch analysis for every new Oracle standard patch in a representative environment
    - Checks and retrieves the flagged files and comments
    - Sends the flagged file list back to the development team for analysis / retest

    Developer:
    - Analyses / updates / retests the affected CEMLIs

    Prerequisite: the patch analysis has to be executed in an environment where flagged files have been registered. (If you run the patch analysis in a vanilla or outdated environment (compared to your PROD), the analysis will not be so helpful!)

    When to start with flagged files? Start right now utilizing this feature. It is an investment that improves production stability and helps fulfil your SLA!

    Summary: Flagged Files is a very helpful EBS R12 technique when analysing patches. Implement a procedure within your development process to maintain such flags. Let the DBA run the patch analysis in an environment with a similar patch and customization level as your current production.

    Related links: EBS Patching Procedures - Chapter 2-13 - Registered Flagged Files

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-30

    - by Bob Rhubart
    Next Generation Mobile Clients for Oracle Applications & the role of Oracle Fusion Middleware | Manish Palaparthy - Manish Palaparthy examines some of Oracle's mobile applications and takes a look at the underlying technology.

    Master Data Management: A Foundation for Big Data Analysis | Manouj Tahiliani - "Businesses that have embraced MDM to get a single, enriched and unified view of Master data by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise like social profiles will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like Retail, Communications, Financial Services, etc that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data." - Manouj Tahiliani

    Architect Day: Boston - Agenda Update - Here's the latest updated information on the session schedule and content for Oracle Technology Network Architect Day in Boston, MA on September 12, 2012. Registration is open, but seating is limited. OTN Architect Day: Boston is being held on Wednesday, September 12, 2012, 8:00 a.m. - 5:00 p.m., at the Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803.

    Integrating Coherence & Java EE 6 Applications using ActiveCache | Ricardo Ferreira - The seamless integration between Oracle Coherence and Oracle WebLogic Server "provides a comprehensive environment to develop applications without the complexity of extra Java code to manage cache as a dependency," explains Ricardo Ferreira, "since Oracle provides a DI (Dependency Injection) mechanism for Coherence, the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache." Ricardo shows you how to configure ActiveCache in WebLogic and your Java EE application.

    Cloud Infrastructure has a new standard from the DMTF - "Unlike a de facto standard where typically one vendor has change control over the interface, and everyone else has to reverse engineer the inner workings of it, [Cloud Infrastructure Management Interface (CIMI)] is a de jure standard that is under change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today and as a result CIMI has a strong foundation in real world experience."

    Oracle GoldenGate 11g Release Launch Webcast - September 12 - The new release of Oracle GoldenGate 11g is now available for major databases and platforms. Register for this webcast and live Q&A with product experts to learn about the solution's new features. September 12, 2012, 8:00 AM and 10:00 AM PT. Speakers: Doug Reid (Director, Product Management, Oracle GoldenGate), Irem Radzik (Director, Product Marketing, Oracle Data Integration Products).

    Thought for the Day - "[When] asking skilled architects… what they do when confronted with highly complex problems… [they] would most likely answer, 'Just use Common Sense.' [A] better expression than 'common sense' is 'contextual sense' - a knowledge of what is reasonable within a given context. Practicing architects through education, experience and examples accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" - Eberhardt Rechtin (January 16, 1926 - April 14, 2006). Source: SoftwareQuotes.com

    Read the article

  • jtreg update, December 2012

    - by jjg
    There is a new version of jtreg available. The primary new feature is support for tests that have been written for use with TestNG, the popular open source testing framework. TestNG is supported by a variety of tools and plugins, which means that it is now possible to develop tests for OpenJDK using those tools, while still retaining the ability to have the tests be part of the OpenJDK test suite and run with a single test harness, jtreg. jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg.

    TestNG support

    jtreg supports both single TestNG tests, which can be freely intermixed with other types of jtreg tests, and groups of TestNG tests. A single TestNG test class can be compiled and run by providing a test description using the new action tag:

      @run testng classname

    The test will be executed by using org.testng.TestNG. No main method is required. A group of TestNG tests organized in a standard package hierarchy can also be compiled and run by jtreg. Any such group must be identified by specifying the root directory of the package hierarchy. You can either do this in the top-level TEST.ROOT file, or in a TEST.properties file in any subdirectory enclosing the group of tests. In either case, add a line to the file of the form:

      TestNG.dirs = dir ...

    Directories beginning with '/' are evaluated relative to the root directory of the test suite; otherwise they are evaluated relative to the directory containing the declaring file. In particular, note that you can simply use "TestNG.dirs = ." in a TEST.properties file in the root directory of the test group's package hierarchy. No additional test descriptions are necessary, but test descriptions containing information tags, such as @bug, @summary, etc., are permitted. All the Java source files in the group will be compiled if necessary, before any of the tests in the group are run. The selected tests within the group will be run, one at a time, using org.testng.TestNG.

    Library classes

    The specification for the @library tag has been extended so that any paths beginning with '/' will be evaluated relative to the root directory of the test suite. In addition, some bugs have been fixed that prevented sharing the compiled versions of library classes between tests in different directories. Note: this has uncovered some issues in tests that use a combination of @build and @library tags, such that some tests may fail unexpectedly with ClassNotFoundException. The workaround for now is to ensure that library classes are listed before the test classes in any @build tags. To specify one or more library directories for a group of TestNG tests, add a line of the following form to the TEST.properties file in the root directory of the group's package hierarchy:

      lib.dirs = dir ...

    As before, directories beginning with '/' are evaluated relative to the root directory of the test suite; otherwise they are evaluated relative to the directory containing the declaring file. The libraries will be available to all classes in the group; you cannot specify different libraries for different tests within the group.

    Coming soon ...

    From this point on, jtreg development will be using the new jtreg repository in the OpenJDK code-tools project. There is a new email alias jtreg-dev at openjdk.java.net for discussions about jtreg development. The existing alias jtreg-use at openjdk.java.net will continue to be available for questions about using jtreg.

    For more information ...

    An updated version of the jtreg Tag Language Specification is being prepared, and will be made available when it is ready. In the meantime, you can find more information about the support for TestNG by executing the following command:

      $ jtreg -onlinehelp TestNG

    For more information on TestNG itself, visit testng.org.
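    Not from the original post: a minimal sketch of what a single TestNG-based jtreg test might look like, combining the @run testng action tag described above with an ordinary TestNG test class. The class name and the assertion are hypothetical; only the tag syntax comes from the post.

      /*
       * @test
       * @summary Minimal example of a TestNG test run under jtreg
       * @run testng StringTrimTest
       */

      import org.testng.Assert;
      import org.testng.annotations.Test;

      public class StringTrimTest {

          // Executed via org.testng.TestNG by the @run testng action; no main method is needed.
          @Test
          public void trimRemovesSurroundingWhitespace() {
              Assert.assertEquals("  hello ".trim(), "hello");
          }
      }

    Dropping such a file into a directory of existing jtreg tests should be enough for jtreg to compile and run it with TestNG, per the description above; a whole package hierarchy of similar classes would instead be registered with a TestNG.dirs line in TEST.ROOT or TEST.properties.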

    Read the article

  • Multiple Zend application code organisation

    - by user966936
    For the past year I have been working on a series of applications, all based on the Zend Framework and centered on a complex business logic that all applications must have access to even if they don't use all of it (easier than having multiple library folders for each application, as they are all linked together with a common center). Without going into much detail about what the project is specifically about, I am looking for some input (as I am working on the project alone) on how I have "grouped" my code. I have tried to split it all up in such a way that it removes dependencies as much as possible. I'm trying to keep it as decoupled as I logically can, so that in 12 months' time, when my time is up, anyone else coming in can have no problem extending on what I have produced.

    Example structure:

      applicationStorage\  (contains all applications and associated data)
      applicationStorage\Applications\  (contains the applications themselves)
      applicationStorage\Applications\external\  (application grouping folder; contains all external customer access applications)
      applicationStorage\Applications\external\site\  (main external customer access application)
      applicationStorage\Applications\external\site\Modules\
      applicationStorage\Applications\external\site\Config\
      applicationStorage\Applications\external\site\Layouts\
      applicationStorage\Applications\external\site\ZendExtended\  (contains extended Zend classes specific to this application, for example ZendExtended_Controller_Action extends Zend_Controller_Action)
      applicationStorage\Applications\external\mobile\  (mobile external customer access application; different workflow and limited capabilities compared to the full site version)
      applicationStorage\Applications\internal\  (application grouping folder; contains all internal company applications)
      applicationStorage\Applications\internal\site\  (main internal application)
      applicationStorage\Applications\internal\mobile\  (mobile access has a different flow and limited abilities compared to the main site version)
      applicationStorage\Tests\  (contains PHP unit tests)
      applicationStorage\Library\
      applicationStorage\Library\Service\  (contains all business logic, services and the service locator; these are completely decoupled from the Zend Framework and rely on the models' interfaces)
      applicationStorage\Library\Zend\  (Zend Framework)
      applicationStorage\Library\Models\  (doesn't know services but is linked to the Zend Framework for DB operations; contains model interfaces and model data mappers for all business objects; examples include Iorder/IorderMapper, Iworksheet/IWorksheetMapper, Icustomer/IcustomerMapper)

    (Note: the Modules, Config, Layouts and ZendExtended folders are duplicated in each application folder, but I have omitted them as they are not required for my purposes.)

    The library contains all "universal" code. The Zend Framework is at the heart of all applications, but I wanted my business logic to be Zend-Framework-independent. All model and mapper interfaces have no public references to Zend_Db but actually wrap around it in private. So my hope is that in the future I will be able to rewrite the mappers and DbTables (containing a Models_DbTable_Abstract that extends Zend_Db_Table_Abstract) in order to decouple my business logic from the Zend Framework, if I ever want to move my business logic (services) to a non-Zend-Framework environment (maybe some other PHP framework).

    Using a service locator and registering the required services within the bootstrap of each application, I can use different versions of the same service depending on the request and which application is being accessed. Example: all external applications will have a Service_Auth_External implementing Service_Auth_Interface registered; the same goes for internal applications, with Service_Auth_Internal implementing Service_Auth_Interface, retrieved via Service_Locator::getService('Auth').

    I'm concerned I may be missing some possible problems with this. One I'm half-thinking about is a config.ini file for all externals, then a separate application config.ini overriding or adding to the global external config.ini. If anyone has any suggestions I would be greatly appreciative. I have used context switching for AJAX functions within the individual applications, but there is a big chance both external and internal will get web services created for them. Again, these will be separated due to authorization and different available services:

      \applicationstorage\Applications\internal\webservice
      \applicationstorage\Applications\external\webservice

    Read the article

  • HTML5 localStorage restrictions and limits

    - by Chuck
    HTML5's localStorage databases are usually size-limited — standard sizes are 5 or 10 MB per domain. Can these limits be circumvented by subdomains (e.g. example.com, hack1.example.com and hack2.example.com all have their own 5 MB databases)? And is there anything in the standard that specifies whether parent domains can access their children's databases? I can't find anything, and I can see arguments for doing it either way, but it seems like there has to be some standard model.

    Read the article

  • How can you add a UIGestureRecognizer to a UIBarButtonItem as in the common undo/redo UIPopoverContr

    - by SG
    Problem

    In my iPad app, I cannot attach a popover to a bar button item only after press-and-hold events. But this seems to be standard for undo/redo. How do other apps do this?

    Background

    I have an undo button (UIBarButtonSystemItemUndo) in the toolbar of my UIKit (iPad) app. When I press the undo button, it fires its action, which is undo:, and that executes correctly. However, the "standard UE convention" for undo/redo on iPad is that pressing undo executes an undo, while pressing and holding the button reveals a popover controller where the user selects either "undo" or "redo" until the controller is dismissed. The normal way to attach a popover controller is with presentPopoverFromBarButtonItem:, and I can configure this easily enough. To get this to show only after press-and-hold, we have to set a view to respond to "long press" gesture events as in this snippet:

      UILongPressGestureRecognizer *longPressOnUndoGesture = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPressOnUndoGesture:)];
      //Broken because there is no customView in a UIBarButtonSystemItemUndo item
      [self.undoButtonItem.customView addGestureRecognizer:longPressOnUndoGesture];
      [longPressOnUndoGesture release];

    With this, after a press-and-hold on the view the method handleLongPressOnUndoGesture: will get called, and within this method I will configure and display the popover for undo/redo. So far, so good. The problem with this is that there is no view to attach to: self.undoButtonItem is a UIBarButtonItem, not a view.

    Possible solutions

    1) [The ideal] Attach the gesture recognizer to the bar button item. It is possible to attach a gesture recognizer to a view, but a UIBarButtonItem is not a view. It does have a .customView property, but that property is nil when the bar button item is a standard system type (as it is in this case).

    2) Use another view. I could use the UIToolbar, but that would require some weird hit-testing and be an all-around hack, if it is even possible in the first place. There is no other alternative view to use that I can think of.

    3) Use the customView property. Standard types like UIBarButtonSystemItemUndo have no customView (it is nil). Setting the customView will erase the standard contents, which it needs to have. This would amount to re-implementing all the look and function of UIBarButtonSystemItemUndo, again if it is even possible to do.

    Question

    How can I attach a gesture recognizer to this "button"? More specifically, how can I implement the standard press-and-hold-to-show-redo-popover in an iPad app? Ideas? Thank you very much, especially if someone actually has this working in their app (I'm thinking of you, Omni) and wants to share...

    Read the article

  • A layout for a Maven project with a patched dependency

    - by zamza
    Suppose I have an open-source project that depends on some library that must be patched in order to fix some issues. How do I do that? My ideas are:

    1. Have the library sources set up as a module and keep them in my VCS. Pros: simple. Cons: third-party sources in my repo, might slow down the build process, hard to find a patched place (though that can be fixed in the README).

    2. Have a module, like in 1, but keep only the patched source files, compile them with the original library jar on the classpath, and somehow replace the *.class files in the library jar on build. Pros: builds faster, easy to find the patched places. Cons: hard to configure, and that jar hackery is non-obvious (the library jar in the repository and in my project assembly would be different).

    3. Keep the patched *.class files in main/resources and replace them on packaging, like in 2). Pros: almost none. Cons: binaries in the VCS, hard to recompile a patched class as patch compilation is not automated.

    One nice solution is to create a distinct project with the patched library sources and deploy it to a local/enterprise repository with a -patched qualifier. But that would not fit an open-sourced project that is meant to be easily buildable by anyone who checks out its sources. Or should I just say "and also, before you build my project, please check out that stuff and run mvn install"?

    Read the article

  • Error installing geoip_city gem

    - by tomeara
    I keep getting an error when trying to install the geoip_city gem. I've already installed the GeoIP C library to /opt/GeoIP, but the gem doesn't seem to pick it up. I've tried:

      sudo gem install geoip_city -- --with-geoip-dir=/opt/GeoIP
      sudo gem install geoip_city -- --with-geoip-lib=/opt/GeoIP/lib
      sudo gem install geoip_city -- --with-geoip-dir=/opt/GeoIP --with-geoip-lib=/opt/GeoIP/lib

    all of which output this error:

      Building native extensions. This could take a while...
      ERROR: Error installing geoip_city:
        ERROR: Failed to build gem native extension.

      /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb --with-geoip-lib=/opt/GeoIP/lib
      checking for GeoIP_record_by_ipnum() in -lGeoIP... no
      you must have geoip c library installed!
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.

      Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
        --with-geoip-dir
        --without-geoip-dir
        --with-geoip-include
        --without-geoip-include=${geoip-dir}/include
        --with-geoip-lib=${geoip-dir}/lib
        --with-GeoIPlib
        --without-GeoIPlib

      Gem files will remain installed in /Library/Ruby/Gems/1.8/gems/geoip_city-0.2.0 for inspection.
      Results logged to /Library/Ruby/Gems/1.8/gems/geoip_city-0.2.0/gem_make.out

    Read the article
