Search Results

Search found 64711 results on 2589 pages for 'core data'.


  • Monitoring Baseline

    - by Grant Fritchey
    Knowing what's happening on your servers is important; that's monitoring. Knowing what happened on your servers is establishing a baseline. You need to do both. I really enjoyed this blog post by Ted Krueger (blog|twitter). It's not enough to know what happened in the last hour or yesterday; you need to compare today to last week, especially if you released software this weekend. You need to compare today to 30 days ago in order to begin to establish future projections. How your data has changed over the last 30 days is a great indicator of how it's going to change over the next 30. No, it's not perfect, but predicting the future is not exactly a science; just ask your local weatherman. Red Gate's SQL Monitor can show you the last week, the last 30 days, the last year, or all the data you've collected (if you choose to keep a year's worth of data or more, please have PLENTY of storage standing by). You have a lot of choice and control here over how much data you store. Here's the configuration window showing how you can set this up (screenshot in the original post). This is for version 2.3 of SQL Monitor, so if you're running an older version, you might want to update. The key point is that a baseline simply represents a moment in time on your server. The ability to compare now to then is what you're looking for in order to have a really useful baseline, as Ted lays out so well in his post.

    Read the article

  • Setting up Clojure Project And Sub Projects

    - by octopusgrabbus
    This is primarily a lein question about setting up a major project and its sub-projects, and is not intended to be a discussion question. Instead, I am interested in either a pointer to documentation or to a Clojure/lein best-practices link. I have a municipal property assessments application that splits two master files into different subset files, depending on whether a billing transfer is taking place or we want to batch-update new accounts, rather than making our assessment department enter new accounts once in their system and then again in the tax collection system. My application is going to be large enough that I can see a common-library lein project with support functions, like splitting apart the files, and then individual lein projects that use the common library. Should the lein projects be set up at the same level, with support included through the project.clj/core.clj files? Is there an advantage to creating lein new projects underneath a major project? Is there a problem with combining all the functions in one project? I could probably make my one core.clj contain all flavors of the program, but coming from a C/C++ and Python background, I would prefer to have a lot of little projects.
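
    One common lein layout for this (a sketch, not from the original question; project names such as assessments-common and billing-transfer are made up) is to keep the shared code in its own project, publish it to the local Maven repository with lein install, and have each application project depend on it in its project.clj:

        ;; project.clj for one application project; assessments-common is the
        ;; hypothetical shared-library project, installed with `lein install`
        (defproject billing-transfer "0.1.0-SNAPSHOT"
          :description "Splits the master assessment files for a billing transfer"
          :dependencies [[org.clojure/clojure "1.4.0"]
                         [assessments-common "0.1.0-SNAPSHOT"]]
          :main billing-transfer.core)

    During development, a checkouts directory in the application project containing a symlink to the common library avoids re-running lein install after every change.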

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading; the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands:

        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to extremely strongly highly recommend that you use solid-state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant and there is no excuse for not using 100% solid-state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.
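
    For reference, a sketch of the full sequence including the single-user wrapper the post mentions (standard T-SQL for SQL Server 2005+; [DB] is a placeholder for your database name):

        -- Kick out open connections so the READ_COMMITTED_SNAPSHOT change
        -- is not blocked waiting for them, then restore normal access.
        ALTER DATABASE [DB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON;
        ALTER DATABASE [DB] SET MULTI_USER;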

    Read the article

  • How can I get nvidia-96 installed?

    - by Bob
    I'm at my wits' end here. This is my last effort before I go back to Windows. I need to get the nvidia-96 proprietary driver installed. Synaptic won't install it because it says it has dependencies. I installed every single dependency it listed except for "xorg-video-abi-10", which does not show up as an item that can be installed. I have no idea what to do. Using 11.10 with an NVIDIA GeForce 3 GPU. Anyone know how to get this dang driver installed? @fossfreedom: the open-source driver is extremely slow. So slow that the OS is unusable: words appear seconds after I type them, and programs take forever to perform actions. Also, it is causing my monitor to turn on and off for no reason. @yossile: Synaptic shows that I have xserver-xorg-core installed. And xserver-xorg-core-udeb does not show up as something that can be installed. @papseddy: when I try to install the downloaded NVIDIA driver, it says it won't work until I disable the Nouveau kernel driver. I have tried everything to get this dang Nouveau kernel driver disabled. Nothing has been successful.
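
    For the Nouveau part, the usual recipe (a sketch of the standard approach, not something from this thread; the file name is conventional, not mandated) is to blacklist the module and rebuild the initramfs before re-running the NVIDIA installer:

        # /etc/modprobe.d/blacklist-nouveau.conf
        blacklist nouveau
        options nouveau modeset=0

    Then run sudo update-initramfs -u and reboot; Nouveau should no longer claim the card on the next boot.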

    Read the article

  • How to view/mount other partitions on your hard drive

    - by Preston Zacharias
    Recently I installed Ubuntu 12.04 Beta 2 from a USB flash drive onto an old external HDD, which I took out of its casing and successfully mounted in my desktop computer. There is no other operating system besides the newly installed Ubuntu. However, there was about 500 GB of data on the drive. This is why I used partitioning software on my Windows 7 netbook to partition the hard drive: 1 TB set aside for files, 350 GB of space for Linux, and the remaining 650 GB for Vista, which I plan on installing soon. But this is where the problem set in: when installing Ubuntu, it did not recognize that the drive was partitioned at all; it was just one big open block of space. So I used the installer's built-in partitioning feature to set aside 300 GB for the main Ubuntu install and 50 GB for swap space. I set both of these partitions to be created at the "end" so that it wouldn't delete or write over my data. And this is where I am really lost: when booting into Ubuntu I am able to use it perfectly fine, got on the internet, etc., but I have NO CLUE as to how I can view the files that were previously on the drive (all of the data that I had prior to the install). How can I mount/view the other partition so that I can have access to my data? Thank you ahead of time! I REALLY appreciate any help or advice! ~Preston
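
    A sketch of the usual terminal route (device names here are examples; check fdisk's output for the real ones):

        # List the partitions the kernel can see
        sudo fdisk -l

        # Create a mount point and mount the NTFS data partition
        # (assuming it turned out to be /dev/sda3)
        sudo mkdir -p /media/data
        sudo mount -t ntfs-3g /dev/sda3 /media/data

    If the data partition shows up in fdisk's list, it should also appear in the file manager's sidebar, where clicking it mounts it without any terminal work.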

    Read the article

  • "unresolvable problem" error when upgrading from 12.04 to 14.04

    - by flyingfisch
    So I have solved this issue, but now I have another problem:

        An unresolvable problem occurred while calculating the upgrade.
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug using the command
        'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.

    I am not upgrading to a pre-release version of Ubuntu, and I am not running a pre-release either. I have unchecked all my third-party packages using the Ubuntu Software Center, Edit > Software Sources... What else might be wrong?

    UPDATE: After doing sudo update-manager -d and sudo apt-get update; sudo apt-get dist-upgrade as per JimB's post, and then running sudo do-release-upgrade, here is what I get:

        Err http://extras.ubuntu.com trusty/main Translation-en
        Err http://extras.ubuntu.com trusty/main Translation-en_US
        Err http://extras.ubuntu.com trusty/main Translation-en
        Ign http://extras.ubuntu.com trusty/main Translation-en_US
        Ign http://extras.ubuntu.com trusty/main Translation-en
        Fetched 0 B in 0s (0 B/s)
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade.
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug using the command
        'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        === Command detached from window (Mon Aug 18 23:53:10 2014) ===
        === Command terminated with exit status 1 (Mon Aug 18 23:53:10 2014) ===
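
    Since the errors point at extras.ubuntu.com, one thing worth trying (a hedged sketch; paths assume a stock Ubuntu install, and this comments out every extra source, so review the files first) is silencing all third-party apt sources before retrying:

        # Comment out every entry under sources.list.d, then refresh and retry
        sudo sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/*.list
        sudo apt-get update
        sudo do-release-upgrade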

    Read the article

  • Android From Local DB (DAO) to Server sync (JSON) - Design issue

    - by Taiko
    I sync data between my local DB and a server, and I'm looking for the cleanest way to model all of this. I have a com.something.db package that contains a DataHelper and a couple of DAO classes that represent objects stored in the DB (I didn't write that part):

        com.something.db
          -- public DataHelper
          -- public Employee
               @DatabaseField   // e.g. "name" will be an actual column name in the DB
               - name
               @DatabaseField
               - salary
               etc... (all in all, 50 fields)

    I have a com.something.sync package that contains all the implementation detail on how to send data to the server. It boils down to a ConnectionManager that is fed by different classes implementing a 'Request' interface:

        com.something.sync
          -- public interface ConnectionManager
          -- package ConnectionManagerImpl
          -- public interface Request
          -- package LoginRequest
          -- package GetEmployeesRequest

    My issue is that at some point in the sync process, I have to JSONise and de-JSONise my data (e.g. the Employee class). But I really don't feel like having the same Employee class be responsible for both its JSONisation and its actual representation inside the local database. It doesn't feel right, because I carefully decoupled the rest; I am only stuck on this JSON thing. What should I do? Should I write three Employee classes?

        EmployeeDB
          @DatabaseField   // e.g. "name" will be an actual column name in the DB
          - name
          @DatabaseField
          - salary
          - etc... 50 fields

        EmployeeInterface
          - getName
          - getSalary
          - etc... 50 fields

        EmployeeJSON
          - JSON_KEY_NAME = "name"   // the JSON key happens to match the column name, but that isn't a requirement
          - name
          - JSON_KEY_SALARY = "salary"
          - salary
          - etc... 50 fields

    It feels like a lot of duplicates. Is there a common pattern I can use there?
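
    One common middle ground (a sketch, not from the original question; it assumes the existing Employee entity exposes getters/setters) is to keep the single DB-annotated Employee and move all JSON knowledge into a dedicated mapper class, so neither concern leaks into the other:

        // EmployeeJsonMapper.java: JSON (de)serialization lives here, not in
        // the entity. Uses org.json, which ships with Android; field names
        // are illustrative, the real class would cover all 50.
        import org.json.JSONException;
        import org.json.JSONObject;

        public class EmployeeJsonMapper {
            private static final String KEY_NAME = "name";
            private static final String KEY_SALARY = "salary";

            public JSONObject toJson(Employee e) throws JSONException {
                JSONObject o = new JSONObject();
                o.put(KEY_NAME, e.getName());
                o.put(KEY_SALARY, e.getSalary());
                return o;
            }

            public Employee fromJson(JSONObject o) throws JSONException {
                Employee e = new Employee();
                e.setName(o.getString(KEY_NAME));
                e.setSalary(o.getDouble(KEY_SALARY));
                return e;
            }
        }

    That gives two classes per entity instead of three: the sync package depends on the mapper, and the db package never sees JSON.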

    Read the article

  • Wireless drivers

    - by Kencer
    The results for my laptop are below. How do I install the wireless and graphics drivers?

        $ lspci
        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4)
        00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        02:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01)
        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07)

        $ sudo lshw -c network
        *-network UNCLAIMED
             description: Network controller
             product: Broadcom Corporation
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:f0500000-f0507fff
        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: eth0
             version: 07
             serial: 3c:97:0e:85:c0:0d
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl8168e-3_0.0.4 03/27/12 ip=172.16.96.36 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
             resources: irq:43 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff

        $ rfkill list all
        0: tpacpi_bluetooth_sw: Bluetooth
             Soft blocked: no
             Hard blocked: no
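
    For Broadcom wireless chips of this family, a commonly suggested first step on Ubuntu (hedged: whether it covers PCI device 4365 depends on the exact chip revision) is the packaged proprietary STA driver:

        sudo apt-get install bcmwl-kernel-source

    If lshw stops reporting the controller as UNCLAIMED after a reboot, the driver took.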

    Read the article

  • Hibernation to a swap file in 12.04, filefrag output

    - by MrHug10
    I've been trying for some time now to get hibernation working in Ubuntu 12.04 on my Dell XPS 17. I dual-boot Windows 7 and Ubuntu, each having its own partition, with one shared partition for all my data and documents. As I would like to be able to switch from Ubuntu to Windows without losing all the things I am currently doing in Ubuntu, I would like to be able to use hibernation. To achieve this I've followed the information at http://ubuntuforums.org/showthread.php?t=1042946, only instead of creating my swap file on my Linux partition (which is formatted ext4), I've chosen to create one on my shared partition (which is formatted NTFS). There is a problem with this, though (at least, that's what I think the problem is), because when I call sudo filefrag -v /media/data/Ubuntu_Swap_Space/6GiB.swap, I get the following output:

        Filesystem type is: 65735546
        File size of /media/Data/Ubuntu_Swap_Space/6GiB.swap is 6442450944 (1572864 blocks, blocksize 4096)
        Discontinuity: Block 22 is at 25829097 (was 232498)
        /media/Data/Ubuntu_Swap_Space/6GiB.swap: 2 extents found

    So I'm not sure what I need to fill in as an offset to follow the rest of the earlier-mentioned information. I've tried both the location of block 22 and the number that is listed after that, but when I then try sudo pm-hibernate, nothing happens, and this shows up in my /var/log/pm-suspend.log:

        s2disk: Could not use the resume device (try swapon -a). Reason: No such device

    Hope someone can help me out with this! If you need more information about anything, please let me know!

    Read the article

  • Visual WebGui launches a new prize-winning challenge for developers

    - by Webgui
    Gizmox is announcing a ListView Challenge in which developers can participate by creating and submitting their own implementations of the new extended ListView. "It's quite amazing what you can do with it. It opens a lot of new ways to present data in a better and more user-friendly way," says one of the VWG community members, who built a three-level hierarchical ListView. Watch the hierarchical ListView demo by Visualizer. The ListView implementations will be reviewed and rated, and the winner will receive a free Professional Studio license worth $750. The 5 top-rated entries will entitle their developers to a cool new T-shirt. The new v6.4 introduces new capabilities with its extended ListView control. Enter the Challenge! The Collapsible Panel enhancement of the ListView control, along with the Column Type control, opens up the possibilities for potential usage of the ListView control for data display and data entry, and as the Collapsible Panel can contain whatever control you like, it can contain other ListView controls as well, thus making it possible to create a hierarchical ListView display with an unlimited number of levels. The first enhancement is the introduction of a new column type, Control, which opens up the possibility for a ListView cell to contain controls like a CheckBox, ComboBox, ListBox, or even a TabControl, Form, or another ListView as the contents of that particular cell. This means that the ListView is no longer a display-only control, but has the full potential of being a full-blown data entry control as well. The second major enhancement is the introduction of the ListViewPanelItem. The ListViewPanelItem behaves exactly the same as its predecessor, the ListViewItem, and in addition it has a Panel control attached to it, a separate panel for each row in the ListView. This new Panel can be either expanded (visible) or not (hidden), and when expanded it fills the full width of the ListView, with adjustable height. Watch a webcast about the extended ListView.

    Read the article

  • Defining formulas through the user interface in a user form

    - by BriskLabs Pakistan
    I am a student developing a simple assignment, a Windows Forms application, in Visual Studio 2010. The application is supposed to construct formulas per user requirements. The process: it has to pick data from columns of a Microsoft Access database, and the user should be able to pick the data by column name, like we do in a drop-down menu, and create reusable formulas from it (configure it once and be able to change it again). The following are example column titles from the database that can be picked:

        Col 1: Marks in Maths
        Col 2: Total Marks in Maths
        Col 3: Marks in Science
        Col 4: Total Marks in Science

    Finally, we should be able to construct any formula in the UI, like:

        (Col 1 + Col 3) / (Col 2 + Col 4) = Formula 1

    Once this formula is set, it is saved and a name is assigned to it by the user. He/she can then use the formula, and the results shall appear in a window below. That is, the user would be able to calculate the desired figures by only manipulating the underlying data in the UI layer: choose the data for a period, apply the formula, and get the answer. Problem: it looks like I have to create an app where the rules are set through the UI, which means no stored procedures are required in SQL. Please suggest the right approach.
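
    One lightweight approach (a sketch under assumptions: the Access data is loaded into an ADO.NET DataTable, and the column and formula names here are made up) is to store each user-defined formula as an expression string over column names and let ADO.NET evaluate it as a computed column, with no stored procedures involved:

        // Evaluate a user-configured formula over in-memory data with a
        // DataColumn expression (ADO.NET parses and computes it per row).
        using System;
        using System.Data;

        class FormulaDemo
        {
            static void Main()
            {
                var table = new DataTable("Scores");
                table.Columns.Add("MarksMaths", typeof(double));
                table.Columns.Add("TotalMaths", typeof(double));
                table.Columns.Add("MarksScience", typeof(double));
                table.Columns.Add("TotalScience", typeof(double));
                table.Rows.Add(45.0, 50.0, 38.0, 50.0);

                // "Formula 1" as the user configured it in the UI
                string formula = "(MarksMaths + MarksScience) / (TotalMaths + TotalScience)";
                table.Columns.Add("Formula1", typeof(double), formula);

                Console.WriteLine(table.Rows[0]["Formula1"]); // prints 0.83
            }
        }

    The formula strings themselves can be persisted in a table alongside their user-given names, so they stay editable without recompiling.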

    Read the article

  • Extension GLX missing...on a desktop PC

    - by Bart van Heukelom
    I just installed Ubuntu 12.10 on a new PC with an Nvidia GTX 560 graphics card, but after installing the Nvidia proprietary drivers (either -current or -current-updates), Unity won't start. When trying to start it manually I get the message "extension GLX missing". I've searched around and found results like this question, which point out it's a problem with Nvidia Optimus laptops. However, I don't have this problem on a laptop, but on a desktop PC. lshw output for the graphics card:

        *-display
             description: VGA compatible controller
             product: GF114 [GeForce GTX 560 SE]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
             configuration: driver=nouveau latency=0
             resources: irq:16 memory:f4000000-f5ffffff memory:e0000000-e7ffffff memory:e8000000-ebffffff ioport:e000(size=128) memory:f6000000-f607ffff

    and CPU:

        *-cpu
             description: CPU
             product: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
             vendor: Intel Corp.
             physical id: 40
             bus info: cpu@0
             version: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
             slot: SOCKET 0
             size: 1600MHz
             capacity: 3800MHz
             width: 64 bits
             clock: 100MHz
             capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms cpufreq
             configuration: cores=4 enabledcores=4 threads=4

    Read the article

  • Which pattern is best for a large project

    - by shamim
    I have several years of software development experience, but I am not a particularly keen and adroit programmer; to perform better I need helping hands. Recently I engaged in an ERP project. For this project I want a very effective structure, one that will be easily maintainable with no compromise on performance. The following structure is present in my old projects:

        Entity Layer
        BusinessLogic Layer
        DataLogic Layer
        UI Layer

    (A picture in the original post shows how they are internally connected.) For my new project I want to change the structure and follow these layers:

        Core Layer (common)
        BLL
        DAL
        Model
        UI

    (Again, a picture in the original post shows how they are internally connected.) Though I've been googling, some initial questions are still unclear to me:

        For the new project I want to use Entity Framework; is it a good idea?
        Will it increase my project's performance?
        Will it be more maintainable than the previous structure?
        What are Entity Framework's core disadvantages/benefits?

    For my project I need help selecting the best structure. Will my new structure be better than the old one?
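
    For orientation, a minimal Entity Framework code-first sketch of what the Model layer could contain (entity and context names are illustrative, not from the poster's project):

        // Model layer: a plain entity plus a DbContext; EF maps both to the
        // database without hand-written DataLogic code.
        using System.Data.Entity;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public class ErpContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
        }

    The BLL would then depend on ErpContext (ideally behind a repository interface declared in the Core layer), keeping the DAL thin.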

    Read the article

  • SOA performance on SPARC T5 benchmark results

    - by JuergenKress
    The brand new, super fast SPARC T5 servers are available. The platform is superb for running large SOA Suite environments or for consolidating your whole middleware platform. Some performance advice, recommended for all workloads:

        Performance profile for SOA apps on Oracle Solaris 11
        BPEL (Fusion Order Demo) instances per second
        OSB (messages / transformations per second)
        Crypto acceleration study for SOA transformations
        SPARC T4 and T5 platform testing, pre-tuning

    Performance is suitable for mid-to-high-range enterprises, in a stand-alone SOA deployment or in a virtualized consolidation environment shared with Oracle applications:

        2.2x to 5x faster than SPARC T3 servers
        25% faster SOA throughput, core to core, than Intel 5600-series servers (running Exalogic software)
        SPARC T5 has 2x the consolidation density of Intel 5600-class processors
        2x faster initial deployment time using Optimized Solutions pre-tested configuration steps
        Over 200 application adapters for the easiest Oracle software integration

    Would you like the details? We can share T5 SOA Suite performance benchmarks with you on a 1:1 basis; please contact your local partner manager or me! SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: T5, TS Sparc, T5 SOA, benchmark, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • How to deal with a CEO who makes all technical decisions but has little technical knowledge?

    - by anonymous
    Hi. This question is posted anonymously for obvious reasons. I am working in a company with a dev group of 5-6 developers, and I am in a situation which I have a hard time dealing with. Every technical choice (language, framework, database, database schema, configuration scheme, etc.) is decided by the CEO, often without much rationale. It is very hard to modify those choices, and his main argument consists of "I don't like this", even though we propose several alternatives with detailed pros/cons. He will also decide to rewrite our core product from scratch without giving a reason why, and he never participates in dev meetings because he considers they make things slower... I am already looking at alternative job opportunities, but I was wondering if there is anything we (the developers) could do to improve the situation. Two examples which shocked me: he will ask us to implement something akin to configuration management, but he rejects any existing framework because it is not written in the language he likes (even though the implementation language is irrelevant), and he expects us to be able to write those systems in a couple of days, "because it is very simple". He also keeps rewriting our core product from scratch on his own because the current codebase is too bad (a codebase whose design was his). We are at our third rewrite in one year, each rewrite worse than the previous one. Things I have tried so far include doing elaborate benchmarks on our product (he keeps complaining that our software is too slow and justifies rewrites as making it faster) and implementing solutions with existing products as working proof instead of just making pros/cons charts. But still, 90% of those efforts go to the trash bin (never with any kind of rationale beyond his not liking it, again), and I often get reprimanded because I don't do exactly as he wants (he does not realize that what he wants is impossible).

    Read the article

  • ArchBeat Link-o-Rama for October 23, 2013

    - by OTN ArchBeat
    Virtual Dev Day: Oracle ADF Development - Web, Mobile, and Beyond. This free virtual event includes technical sessions that range from introductory to deep dive, covering Oracle ADF and Oracle ADF Mobile. Multiple tracks cover every interest and every level, and include live online Q&A for answers to your technical questions. Register now! Americas: Tuesday, November 19, 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT. APAC: Thursday, November 21, 10am-1:30pm IST (India) / 12:30pm-4pm SGT (Singapore) / 3:30pm-7pm AESDT. EMEA: Tuesday, November 26, 9am-1pm GMT / 1pm-5pm GST / 2:30pm-6:30pm IST.

    A Roadmap for SOA Development and Delivery | Mark Nelson. Do you know the way to S-O-A? Mark Nelson does. His latest blog post, part of an ongoing series, will help to keep you from getting lost along the way.

    Updated ODI Statement of Direction | Robert Schweighardt. Heads up, Oracle Data Integrator fans! A new product statement-of-direction document is available, offering "an overview of the strategic product plans for Oracle's data integration products for bulk data movement and transformation, specifically Oracle Data Integrator (ODI) and Oracle Warehouse Builder (OWB)."

    Java-Powered Robot Named NAO Wows Crowds | Tori Wieldt. Java community manager Tori Wieldt interviews a robot and a human.

    Nordic OTN Tour 2013 | Lonneke Dikmans. Oracle ACE Director Lonneke Dikmans checks in from the Stockholm leg of the Nordic OTN Tour for 2013, sponsored by the Danish Oracle User Group and featuring fellow ACE Directors Tim Hall and Sten Vesterli, plus local speakers at various stops. Lonneke's post includes the slides from three of the presentations.

    Thought for the Day: "Some people approach every problem with an open mouth." - Adlai E. Stevenson, 23rd Vice President of the United States (October 23, 1835 - June 14, 1914). Source: brainyquote.com

    Read the article

  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, its primary contribution being the componentization of your applications and, implicitly, the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well, one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings". What's an adaptive binding? Well, consider a simple use case. I have several screens within my application which display tabular data and which are all essentially identical; the only difference is that they happen to be based on different data collections (View Objects, Bean collections, whatever) and have a different set of columns. Apart from that, however, they happen to be identical: same toolbar, same key functions, and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on, you say: great idea; however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file, and: (1) I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the task flow; (2) if I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle. Ah, joy! I reply: no need to panic, you can just use adaptive bindings.

    Defining an Adaptive Binding

    It's easiest to explain with a simple before-and-after use case. Here's a basic pageDef definition for our familiar Departments table:

        <executables>
          <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="DepartmentsView1Iterator"/>
        </executables>
        <bindings>
          <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">
            <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">
              <AttrNames>
                <Item Value="DepartmentId"/>
                <Item Value="DepartmentName"/>
                <Item Value="ManagerId"/>
                <Item Value="LocationId"/>
              </AttrNames>
            </nodeDefinition>
          </tree>
        </bindings>

    Here's the adaptive version:

        <executables>
          <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="TableSourceIterator"/>
        </executables>
        <bindings>
          <tree IterBinding="TableSourceIterator" id="GenericView">
            <nodeDefinition Name="GenericViewNode"/>
          </tree>
        </bindings>

    You'll notice three changes here. Most importantly, the hard-coded View Object name that formerly populated the iterator's Binds attribute is gone, replaced by an expression (${pageFlowScope.voName}). This, of course, is key: you can pass a parameter to the task flow telling it exactly which VO to instantiate to populate this table! Second, I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable. Third, the tree binding itself has been simplified, and the node definition is now empty. What this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat! (Kudos to Eugene Fedorenko at this point, who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld this year.)

    Using the adaptive binding in the UI

    Now that we have a parametrized binding, we have to make changes in the UI as well: first of all to reflect the new ID that we've assigned to the binding (of course), but also to change the column list from a fixed, known list to a generic, metadata-driven set:

        <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"
                  fetchSize="#{bindings.GenericView.rangeSize}"
                  emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"
                  var="row" rowBandingInterval="0"
                  selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"
                  selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"
                  rowSelection="single" id="t1">
          <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">
            <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"
                       sortProperty="#{def.name}" id="c1">
              <af:outputText value="#{row[def.name]}" id="ot1"/>
            </af:column>
          </af:forEach>
        </af:table>

    Of course, you are not constrained to a simple read-only table here. It's a normal tree binding and iterator that you are using behind the scenes, so you can do all the usual things, and you can see the value of using ADF BC as the back-end model, as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...).

    One Final Twist

    To finish on a high note, I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator; see if you can spot the subtle change:

        <iterator Binds="${pageFlowScope.voName}" DataControl="${pageFlowScope.dataControlName}" RangeSize="25"
                  id="TableSourceIterator"/>

    Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. If you have some really common patterns within your app, you can wrap them up and reuse them across multiple developments without having to dictate data control names or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs: if you keep any code in the task flow generic in that way, you can deal with data from multiple types of data controls, not just one flavour. Enjoy!

    Read the article

  • Code execution times out occasionally

    - by Athul k Surendran
    I am working on an e-commerce website. There is a case where I need to fetch all the data in the database through a third-party API and send it to an indexing engine. This third-party API has many functions, like getproducts, getproductprice, etc., and each of those functions returns its data in XML format. From there I take charge: I make various API calls, handle the XML data with XSLT, and write out a CSV file. This file is then uploaded to the indexing engine. Right now I have details of 8000 products to feed the engine; almost every time, the process takes about 15 minutes to complete, and sometimes it fails. I can't find a better solution for this. I am thinking about handling the XML data in C# itself and getting rid of XSLT; as I see it, XSLT is far slower than C#. Is that a good idea? Or what else can I do to solve this issue?
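
    A sketch of the C#-only route (element names like product, id, name, and price are assumptions; the real feed's schema will differ), using LINQ to XML and a plain StreamWriter instead of an XSLT pass:

        // Parse one API response and write rows to the CSV feed.
        using System;
        using System.IO;
        using System.Xml.Linq;

        class ProductFeed
        {
            static void Main()
            {
                XDocument doc = XDocument.Load("products.xml");
                using (var writer = new StreamWriter("feed.csv"))
                {
                    writer.WriteLine("Id,Name,Price");
                    foreach (XElement p in doc.Descendants("product"))
                    {
                        writer.WriteLine("{0},{1},{2}",
                            (string)p.Element("id"),
                            (string)p.Element("name"),
                            (string)p.Element("price"));
                    }
                }
            }
        }

    Whether this beats the XSLT pass is worth measuring before committing; the 15 minutes may also be dominated by the API calls themselves rather than by the transformation.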

    Read the article

  • How to write code that communicates with an accelerator in the real address space (real mode)?

    - by ysap
    This is a preliminary question for an issue where I was asked to program a host-accelerator program on an embedded system we are building. The system is comprised of (among the standard peripherals) an ARM core and an accelerator processor. Both processors access the system bus via their bus interfaces and share the same 32-bit global physical memory space. Both share access to the system's DRAM through the system bus. (The computer concept is similar to BeagleBoard/Raspberry Pi, but with a specialized accelerator added.) The accelerator has its own internal memory (SRAM), which is exposed to the system and occupies a portion of the global address space (as opposed to how a graphics card would talk to the CPU via a "small" aperture in the system memory space). On the ARM core (the host) we plan on running Ubuntu 12.04. The intended mode of operation for communicating between the processors is that the host issues memory transactions on the system bus that are targeted at the accelerator's internal memory. As far as my understanding goes, if I write a program for the host that simply writes to the physical address of the accelerator, chances are that the program will crash due to a segmentation violation. So I assume that I need some way of communicating with the device through its real (physical) addresses. What is the easiest way to achieve this mode of operation?
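
    The quickest way to experiment from user space on Linux (a hedged sketch: the base address and window size are made-up placeholders, it needs root, and a UIO or custom kernel driver is the more robust long-term route) is to mmap the accelerator's physical window through /dev/mem:

        /* Map the accelerator SRAM into this process and poke it. */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define ACCEL_BASE 0x40000000u  /* hypothetical physical base of the SRAM */
        #define ACCEL_SIZE 0x10000u     /* hypothetical size of the SRAM window */

        int main(void)
        {
            int fd = open("/dev/mem", O_RDWR | O_SYNC);
            if (fd < 0) { perror("open /dev/mem"); return 1; }

            volatile uint32_t *sram = mmap(NULL, ACCEL_SIZE, PROT_READ | PROT_WRITE,
                                           MAP_SHARED, fd, ACCEL_BASE);
            if (sram == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            sram[0] = 0xdeadbeef;                 /* write into accelerator SRAM */
            printf("readback: 0x%08x\n", sram[0]);

            munmap((void *)sram, ACCEL_SIZE);
            close(fd);
            return 0;
        }

    Note that the kernel must not have CONFIG_STRICT_DEVMEM restricting that address range, and O_SYNC requests an uncached mapping so the stores actually reach the bus.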

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multi-phase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring. Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases, which are stored off-site. The next phase is the file replication of data among our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the servers, files are also stored locally on development machines, and again backed up using version control software. Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution, and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted among the other servers connected to the load balancer, allowing the failing server to get back online. The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS. According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing, and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Also, availability requirements need to be determined per application and system, as well as the strategies for recovery.

    Read the article

  • Lost WiFi after 12.10 upgrade

    - by Steven Guillory
    I received my new Dell Vostro 2420 last week and just got around to upgrading from 11.10 to 12.10. Unfortunately, like many others (after researching the issue), I no longer have WiFi. I have tried every sudo command given that worked for others and still can't get my wireless to function. I am new to Linux, so any and all help is appreciated. Thanks in advance! Edit: I can connect via ethernet, just not via WiFi. As a matter of fact, when I use Fn + F2 to turn on WiFi, only my Bluetooth comes on.

        $ lspci
        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4)
        00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        07:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01)
        09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07)

    This is what I am getting...

        dpkg: error: --install needs at least one package archive file argument

        Type dpkg --help for help about installing and deinstalling packages [*];
        Use 'dselect' or 'aptitude' for user-friendly package management;
        Type dpkg -Dhelp for a list of dpkg debug flag values;
        Type dpkg --force-help for a list of forcing options;
        Type dpkg-deb --help for help about manipulating *.deb files;

        Options marked [*] produce a lot of output - pipe it through 'less' or 'more'!

    Read the article

  • Java enum pairs / "subenum" or what exactly?

    - by vemalsar
    I have an RPG-style Item class, and I store the type of the item in an enum (itemType.sword). I want to store a subtype too (itemSubtype.long), but I want to express the relation between the two data types (a sword can be long, short, etc., but a shield can't be long or short, only round, tower, etc.). I know this is not valid source code, but it is similar to what I want:

        enum type { sword; }  // not valid code!
        enum swordSubtype extends type.sword { short, long }

    Question: How can I define this connection between the two data types (or more exactly, between two values of the data types), and what is the most simple and standard way? Options I can think of:

        Array-like data with all valid (itemType, itemSubtype) enum pairs, or (itemType, itemSubtype[]) so there can be multiple subtypes for one type; that would be the best. OK, but how can I construct this the simplest way?
        A special enum with a "subenum" set, a second-level enum, or anything else, if such a thing exists
        A 2-dimensional "canBePairs" boolean array, with itemType and itemSubtype dimensions covering all types and subtypes, where "true" means the itemType (first dimension) and itemSubtype (second dimension) are okay together and "false" means they are not
        Any other, better idea

    Thank you very much!
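
    The most idiomatic Java route (a sketch; enum and method names are illustrative) is to give each subtype constant a reference to its parent type, so the valid pairs are declared in exactly one place:

        // Each subtype knows which item type it belongs to.
        enum ItemType { SWORD, SHIELD }

        enum ItemSubtype {
            SHORT(ItemType.SWORD),
            LONG(ItemType.SWORD),
            ROUND(ItemType.SHIELD),
            TOWER(ItemType.SHIELD);

            private final ItemType parent;

            ItemSubtype(ItemType parent) { this.parent = parent; }

            ItemType getParent() { return parent; }

            boolean isValidFor(ItemType type) { return parent == type; }
        }

    An Item constructor can then reject mismatched pairs with a single check, e.g. if (!subtype.isValidFor(type)) throw new IllegalArgumentException(...). If one subtype must belong to several types, store an EnumSet<ItemType> instead of a single parent.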

    Read the article

  • Setting up multiple cores for Apache Solr on Ubuntu 12.04 with Drupal 7

    - by chrisjlee
    I'm setting up Solr locally for development and integration with Drupal 7. I'm not very familiar with Tomcat; my background has primarily been LAMP setups. So I went and installed the packages provided by Ubuntu for Apache Solr, following this guide:

        sudo apt-get install tomcat6 tomcat6-admin tomcat6-common tomcat6-user tomcat6-docs tomcat6-examples
        sudo apt-get install solr-tomcat

    I've got that working. The apt-get package manager does a great job and sets up Solr, but with only one core. What steps need to be taken to enable a multi-core setup for Apache Solr? Below is my solr.xml file (sudo nano /var/lib/tomcat6/conf/Catalina/localhost/solr.xml):

        <!-- Context configuration file for the Solr Web App -->
        <Context path="/solr" docBase="/usr/share/solr"
                 debug="0" privileged="true"
                 allowLinking="true" crossContext="true">
          <!-- make symlinks work in Tomcat -->
          <Resources className="org.apache.naming.resources.FileDirContext" allowLinking="true" />
          <Environment name="solr/home" type="java.lang.String" value="/usr/share/solr" override="true" />
        </Context>
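
    Note that the file above is the Tomcat context, which points solr/home at /usr/share/solr. Multi-core is switched on by a different solr.xml that lives inside that Solr home. A hedged sketch for the Solr 1.4/3.x packages of that era (core names and instanceDir paths are assumptions; each instanceDir needs its own conf/ directory, typically copied from the existing single-core conf):

        <!-- /usr/share/solr/solr.xml: declares the cores -->
        <solr persistent="true">
          <cores adminPath="/admin/cores">
            <core name="drupal-dev" instanceDir="drupal-dev" />
            <core name="drupal-test" instanceDir="drupal-test" />
          </cores>
        </solr>

    After restarting Tomcat, each core gets its own URL (e.g. /solr/drupal-dev), which is what the Drupal apachesolr module's connection settings expect.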

    Read the article

  • Two Upcoming Server Virtualization Webcasts

    - by Chris Kawalek
    We have a couple of interesting server virtualization webcasts coming up that you might be interested in. Have a look:  Webcast 1: October 23rd, 9 am PST Virtualized Infrastructure Simplified with Oracle VM and NetApp Storage and Data Management Solutions Point-and-Click Interface Deploys Virtualized Data Infrastructure in Minutes  Provisioning and deploying a virtual data infrastructure can be costly, time-consuming, and prone to error. Oracle VM and NetApp joint solutions, however, give you a single point-and-click interface to deploy your virtualized data infrastructure seamlessly in minutes. Join us in this live webcast to learn more from product experts and view a product demo. Register (for free!) here.  Webcast 2: November 7th, 9 am PST Report Shows Oracle VM Up to 10x Faster than VMware vSphere 5 in Time to Deployment Time is your IT department’s greatest commodity. So when a new report reveals that your IT staff can deploy Oracle Real Application Clusters (Oracle RAC) up to 10 times faster than a traditional install performed with VMware vSphere 5, it’s newsworthy. Join us in this live webcast to learn how you can realize your time savings. Register (for free!) here. 

    Read the article

  • I have an "amoeba" game mechanic. Any idea on how to implement it?

    - by Jason
    Outside of a Tetris clone, a crappy 2D top-down shooter, and some messing around with stuff like Unity and Flixel, I realize that I have yet to complete a single polished, bells-and-whistles game. I want to change this, and I have an idea for my next project. The idea is that you're an amoeba. Amoebas have these eye-like cores (or something like that, I don't know biology), and you have two of them. You control one with WASD and the other with IJKL. There has to be a constant radius of stuff around each of the cores, and the area of the amoeba has to stay constant. So if you move a core in one direction, you increase the amoeba's area, but that increase is compensated by a decrease somewhere else. I'd also like to implement an invagination mechanic: you absorb things by engulfing them, like a boss. Maybe even an extra core, or a needle that pops you and causes all your inner stuff to start gushing out. But here's the problem: I don't know how to make this, and I would like some ideas on how to implement it. Should I explore physics libraries like Box2D? Or maybe something involving fluid physics? Any help would be much appreciated. P.S. Feel free to steal this idea. I have plenty of ideas. If you do, please tell me how you made it so I can try it myself.
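
    For the constant-area rule, one concrete building block (a sketch, under the assumption that the amoeba's membrane is tracked as a polygon of vertices) is the shoelace formula: measure the blob's area every frame, and when a core's movement inflates it, push membrane vertices inward elsewhere until the area matches the target again.

        // Shoelace formula: area of a simple polygon given its vertices in order.
        // A game loop would compare this against the target area and move
        // membrane vertices in or out to compensate.
        static double polygonArea(double[] xs, double[] ys) {
            double twiceArea = 0.0;
            int n = xs.length;
            for (int i = 0; i < n; i++) {
                int j = (i + 1) % n;  // next vertex, wrapping around
                twiceArea += xs[i] * ys[j] - xs[j] * ys[i];
            }
            return Math.abs(twiceArea) / 2.0;
        }

    A soft-body setup in Box2D (a ring of small bodies joined by distance joints, with this area check acting as a corrective "pressure") is a common way to fake exactly this kind of blob.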

    Read the article
