Search Results

Search found 40310 results on 1613 pages for 'two factor'.


  • Technology Choice for a Client Application [on hold]

    - by AK_
    Not sure this is the right place to ask... I'm involved in the development of a new system, and we are now past the demo stage. We need to build a proper client application. The platform we care most about is Windows, for now at least, but we would love to support other platforms, as long as it's free :-). Or at least very cheap. We anticipate two kinds of users:
    - Occasional, coming mostly from the web.
    - Professional, who would probably require more features and better performance, and would probably prefer a native client.
    Our server exposes two APIs:
    - A SOAP API (WCF behind the scenes) that supports 100% of the functionality.
    - A small and very fast UDP + binary API that duplicates some of the functionality and is intended for performance in certain real-time scenarios.
    Our team is mostly proficient in .NET, C#, and C++ development, and rather familiar with web development (HTML, JavaScript). We probably intend to develop two clients (one per user profile): a web app and a native app. Architecturally, we would like to have as many common components as possible, in several layers shared by both clients: Communication, Client Model, Client Logic. We would also like to be able to add features to both clients so that only the actual UI is a dual cost and the rest is shared. We are looking at several technologies: WPF + Silverlight, pure HTML, Flash/Flex (AIR?), Java (JavaFX?), and we are considering poking at WinRT (or whatever the proper name is). The question is: which technology would you recommend, and why? And which advantages or disadvantages would it have regarding our requirements?

    Read the article

  • PHP - Making CMS (architecture, etc.)

    - by UnknownProgramer
    I'm in the stage of planning a new CMS. Before, I used WordPress and other open source CMSes for my clients, but I always had to write new modules and even mess with the code in order to do certain things, which, as you understand, is not the best thing to do. So I finally decided to make my own CMS that works the way I need. But before I start, I would like to think it through carefully to ensure that I won't need to rewrite it from the ground up just because I forgot to include some feature in the architecture or did it wrong. I would like to hear your thoughts, and most importantly I would like you to suggest some articles or books on the subject, especially on the architecture of such systems. I googled a few good books, but that is not enough. The way I'm planning to do it: PHP5, completely OOP, module architecture. You make a page and add any modules you need there, but modules are local to a page rather than global: you can make two pages with the same module, and the content will be different if you set a different "content ID" for the two instances, or the same if both pages should share the module's content (see the sketch below). I also plan to support an online storage web service (like Amazon S3) for images and files, so I would like to hear your thoughts on that too. I have not yet decided how to store language data; I don't want to use the DB for that, but I haven't settled on anything. I also think I will support other databases with a global DB class and separate DB wrappers for MySQL and the other databases. And, well, I would appreciate any other information you can provide on the subject.
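    To make the page-local module idea concrete, here is a minimal, language-agnostic sketch (written in Java; all names are illustrative, not a proposed PHP API): a module instance is bound to a page through a contentId, so two pages can share content by reusing the same ID or keep separate content with different IDs.

        import java.util.ArrayList;
        import java.util.List;

        // Illustrative names; the point is the (moduleType, contentId) binding.
        class ModuleInstance {
            final String moduleType;   // e.g. "news", "gallery"
            final String contentId;    // which stored content this instance shows
            ModuleInstance(String moduleType, String contentId) {
                this.moduleType = moduleType;
                this.contentId = contentId;
            }
        }

        class Page {
            final String name;
            final List<ModuleInstance> modules = new ArrayList<>();
            Page(String name) { this.name = name; }
        }

        public class CmsSketch {
            public static void main(String[] args) {
                Page home = new Page("home");
                Page about = new Page("about");
                // Same module type, same contentId: both pages show the same content
                home.modules.add(new ModuleInstance("news", "news-1"));
                about.modules.add(new ModuleInstance("news", "news-1"));
                // Different contentId: independent content for the same module type
                about.modules.add(new ModuleInstance("gallery", "gal-2"));
            }
        }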

    Read the article

  • One-week release cycle: how do I make this feasible?

    - by Arkaaito
    At my company (3-yr-old web industry startup), we have frequent problems with the product team saying "aaaah, this is a crisis, patch it now!" (doesn't everybody?). This has an impact on the productivity (and morale) of engineering staff, self included. Management has spent some time thinking about how to reduce the frequency of these same-day requests and has come up with the solution that we are going to have a release every week. (Previously we'd been doing one every two weeks, which usually slipped by a couple of days or so.) There are 13 developers and 6 local / 9 offshore testers; the theory is that only 4 developers (and all testers) will work on even-numbered releases, unless a piece of work comes up that really requires some specific expertise from one of the other devs. Each cycle will contain two days of dev work and two days of QA work (plus 1 day of scoping / triage / ...). My questions are:
    (a) Does anyone have experience with this length of release cycle?
    (b) Has anyone heard of this length of release cycle even being attempted?
    (c) If (a) or (b), how on Earth do you make it work? (Any pitfalls to avoid, etc., are also appreciated.)
    (d) How can we minimize the damage if this effort fails?

    Read the article

  • Unity3d web player fails to load textures

    - by José Franco
    I'm having a problem with the Unity3d Web Player. I have developed a project and successfully deployed it in a web app. It works with absolutely no problem on my PC. The app is to be installed on two identical machines; I have installed it on both, and it only works properly on one. On the other, it fails to properly load the models and textures, so the game runs but instead of the models I can only see black rectangles on a blue background. It has the same problem with all browsers, and I get no errors, either from the player or from JavaScript. The only difference between these computers is that the one with the problem is running Windows 8.1 and the other one plain Windows 8. Could this be the cause of the issue? It works fine on my computer with Windows 8.1; however, both of the other computers have specs that are significantly lower than mine. I have already searched everywhere, and it usually seems to come down to the individual game, but I think it may have to do with the computer itself, because the app runs properly on the other two. The specs of the computers I'm installing the app on are as follows: Intel Celeron 1.40 GHz, 2 GB RAM, Intel HD Graphics. If anybody could point me in the right direction I would be very grateful. I forgot to mention: I'm running Unity Web Player 4.3.5, and the version on the other two computers is 4.5.0.

    Read the article

  • Videos: Getting Started with Java Embedded

    - by Tori Wieldt
    Are you a Java developer? That means you can write applications for embedded processors! There are six new videos up on the YouTube/Java channel that you can watch to get more information. To get an overview, watch James Allen of Oracle Global Business Development give OTN a tour of the Oracle booth at ARM TechCon. He also explains the huge opportunity for Java in the embedded space. These videos from Oracle Engineering show you how to leverage your knowledge to seamlessly develop in a space that is really taking off.
    - Java SE Embedded Development Made Easy, Part 1: Demonstrates how developers already familiar with the Java SE development paradigm can leverage their knowledge to seamlessly develop on very capable embedded processors. Part one of a two-part series.
    - Java SE Embedded Development Made Easy, Part 2: Part two of the same two-part series.
    - Mobile Database Synchronization - Healthcare Demonstration: Demonstrates how a good portion of Oracle's embedded technologies (Java SE Embedded, Berkeley DB, Database Mobile Server) can be applied to a medical application.
    - Tomcat Micro Cluster: See how multiple embedded devices installed with Java SE HotSpot for ARMv5/Linux and Apache Tomcat can be configured as a micro cluster.
    - Java Embedded Partnerships: Kevin Smith of Oracle Technical Business Development explains what's new for partners and Java developers in the embedded space. Learn how you can start prototyping for Qualcomm's new Orion board before it's available. (Sorry about the video quality, the booth lights were weird.)
    Visit the YouTube/Java channel for other great Java videos. <fade to black>

    Read the article

  • Welcome to the newly merged JCP EC!

    - by Heather VanCura
    As part of the JCP.Next effort, the second JSR of the JCP program reforms, JSR 355, Executive Committee (EC) Merge, will take effect on Tuesday as JCP 2.9. The first in the effort was JSR 348, which took effect as JCP 2.8 in October 2011. EC members guide the evolution of the Java technologies by approving and voting on all technology proposals (Java Specification Requests, or JSRs). They are also responsible for defining the JCP's rules of governance and the legal agreement between members and the organization. They provide guidance to the Program Management Office (PMO), and they represent the interests of the JCP to the broader community. Starting on Tuesday, 13 November, JCP 2.9 is in effect, and the two ECs -- one representing Java SE/EE and one representing Java ME -- are merged into a single EC. IBM and Oracle each gave up one of their two seats (one per EC), and the terms expired for four members who did not run for re-election: AT&T, Deutsche Telekom, Siemens and Vodafone. All four remain JCP members. In addition, the seat occupied by RIM was forfeited due to lack of participation in October 2012. The JCP values these organizations and representatives for their contribution to the JCP EC, and looks forward to their continued participation in the JCP program. The complete listing of the EC, 24 members in total at the moment, is now available. We asked the two newcomers to the EC, Cinterion and CloudBees, and the re-elected London Java Community, to comment on their plans for their term on the EC. Read about their plans in the article published on JCP.org, "JCP 2.9 with a Merged EC Takes Effect 13 November". Also, plan to attend the public (open to all community members) EC meeting planned for 20 November at 15:00 PST. Details will be posted here and on the JCP.org home page next week.

    Read the article

  • Compute the AES-encryption key given the plaintext and its ciphertext?

    - by Null Pointers etc.
    I'm tasked with creating database tables in Oracle which contain encrypted strings (i.e., the columns are RAW). The strings are encrypted by the application (using AES with a 128-bit key) and stored in Oracle, then later retrieved from Oracle and decrypted (i.e., Oracle itself never sees the unencrypted strings). I've come across one column that will always be one of two strings. I'm worried that someone will notice this, figure out what those two values are, and use the known plaintext/ciphertext pairs to work out the AES key. For example, suppose someone sees that the column is always one of two ciphertexts:
    Ciphertext #1: BF,4F,8B,FE, 60,D8,33,56, 1B,F2,35,72, 49,20,DE,C6
    Ciphertext #2: BC,E8,54,BD, F4,B3,36,3B, DD,70,76,45, 29,28,50,07
    and knows the corresponding plaintexts:
    Plaintext #1 ("Detroit"): 44,00,65,00, 74,00,72,00, 6F,00,69,00, 74,00,00,00
    Plaintext #2 ("Chicago"): 43,00,68,00, 69,00,63,00, 61,00,67,00, 6F,00,00,00
    Can he deduce that the encryption key is "Buffalo"? 42,00,75,00, 66,00,66,00, 61,00,6C,00, 6F,00,00,00
    I'm thinking that there should be only one 128-bit key that could convert Plaintext #1 to Ciphertext #1. Does this mean I should go to a 192-bit or 256-bit key instead, or find some other solution? (As an aside, here are two other ciphertexts for the same plaintexts but with a different key.)
    Ciphertext #1 A ("Detroit"): E4,28,29,E3, 6E,C2,64,FA, A1,F4,F4,96, FC,18,4A,C5
    Ciphertext #2 A ("Chicago"): EA,87,30,F0, AC,44,5D,ED, FD,EB,A8,79, 83,59,53,B7
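    For what it's worth, AES is designed to resist exactly this known-plaintext attack: even with both pairs in hand, recovering the key still means effectively brute-forcing the 2^128 key space. Here is a minimal Java sketch of how such a pair is produced, assuming AES/ECB with no padding (which matches the single 16-byte blocks above; the strings are UTF-16LE with a trailing NUL):

        import javax.crypto.Cipher;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;

        public class KnownPlaintextPair {
            public static void main(String[] args) throws Exception {
                // 16-byte UTF-16LE encodings, matching the byte dumps above
                byte[] key   = "Buffalo\0".getBytes(StandardCharsets.UTF_16LE);
                byte[] plain = "Detroit\0".getBytes(StandardCharsets.UTF_16LE);

                // One raw AES block: no padding, no IV (assumed mode)
                Cipher aes = Cipher.getInstance("AES/ECB/NoPadding");
                aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));

                for (byte b : aes.doFinal(plain)) System.out.printf("%02X,", b & 0xFF);
            }
        }

    The real weakness in the scheme as described is not the key size but determinism: with a fixed key and no IV, equal plaintexts always produce equal ciphertexts, so an observer can distinguish the two values even without the key. A random IV per row (CBC or GCM mode) removes that leak; moving to a 192- or 256-bit key would not.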

    Read the article

  • Handling Types for Real and Complex Matrices in a BLAS Wrapper

    - by mga
    I come from a C background and I'm now learning OOP with C++. As an exercise (so please don't just say "this already exists"), I want to implement a wrapper for BLAS that will let the user write matrix algebra in an intuitive way (e.g. similar to MATLAB), e.g.:
        A = B*C*D.Inverse() + E.Transpose();
    My problem is how to go about dealing with real (R) and complex (C) matrices, because of C++'s "curse" of letting you do the same thing in N different ways. I do have a clear idea of what it should look like to the user: s/he should be able to define the two separately, but operations would return a type depending on the types of the operands (R*R = R, C*C = C, R*C = C*R = C). Additionally, R can be cast into C and vice versa (just by setting the imaginary parts to 0). I have considered the following options:
    - As a real number is a special case of a complex number, inherit CMatrix from RMatrix. I quickly dismissed this, as the two would have to return different types from the same getter function.
    - Inherit RMatrix and CMatrix from Matrix. However, I can't really think of any common code that would go into Matrix (because of the different return types).
    - Templates. Declare Matrix<T>, declare the getter function as T Get(int i, int j) and the operator functions as Matrix operator*(Matrix RHS), then specialize Matrix<double> and Matrix<complex> and overload the functions. But then I couldn't really see what I would gain with templates, so why not just define RMatrix and CMatrix separately from each other, and then overload functions as necessary?
    Although this last option makes sense to me, there's an annoying voice inside my head saying this is not elegant, because the two are clearly related. Perhaps I'm missing an appropriate design pattern? So I guess what I'm looking for is either absolution for doing this, or advice on how to do better.

    Read the article

  • Virtual Pageview Goal Funnel Not Tracking Correctly

    - by cphill
    I have an AJAX form that has three stages:
    1. The landing page, where a user fills out a form, selects between three question sets, and clicks Begin Assessment.
    2. The assessment page, where users fill out questions relating to the question set that they selected on the landing page.
    3. The results page, which shows whether they are at High Risk or Low Risk.
    Since this is an AJAX form that does not open a new page for each step of the process, I implemented a virtual pageview that fires on the page load of each step of the form. The following is my virtual pageview setup for each stage:
    1. /form/begin-assessment
    2. /form/assessment/* (* = three different virtual pageviews depending on the user's selection among the three sets of questions: /one, /two, /three)
    3. /form/finished-assessment
    I have set up three separate goals to track user progress through each step of the form assessment. Here is my goal setup:
    Goal Description: Goal Type: Destination
    Goal Details: Destination: /form/finished-assessment; Funnel: On
    Step 1: /form/begin-assessment (Required: Yes)
    Step 2: /form/assessment/one (replace /one with /two or /three and you have my two other goals)
    Now my goals record the correct data in the first step and show the completions in the destination, but the second step does not show any drop-offs; it shows the same data as the destination. Any ideas where I went wrong in setting up the goals?

    Read the article

  • How can I make multiple displays work on my Asus UX32VD?

    - by oKtosiTe
    Original title: Why do I have two trash icons in the Unity Launcher? Whether I run Ubuntu as a live USB or install it, I always have two trash bins on the Unity Launcher. Both work, and both open the same location. This seems a bit redundant; what can be done about it? Update: Turning auto-hide on made it obvious that I have multiple Launchers showing. With auto-hide off, they simply overlap, making it look like there's a double trash icon, but with auto-hide enabled, I can display one Launcher (and therefore one trash icon) at a time. Still, two are running simultaneously. Second update: This problem appears to be caused by the way Ubuntu handles multiple displays on my Asus UX32VD Ultrabook. Somehow, the laptop display cannot be used while my external display is connected. It is shown in the Displays list, but remains black no matter how I configure it. The external display runs at 1920x1200; the laptop monitor should run at 1920x1080. It therefore becomes obvious that the Launcher that's supposed to run on the laptop display is actually displayed on the external monitor. Using nomodeset as a kernel parameter as indicated here makes the laptop display inaccessible altogether, detecting the external monitor as the laptop display and making resolutions other than 1920x1200 inaccessible. That is not an option.

    Read the article

  • Languages on a resume: Is it better to put "C/C++" or "C, C++"?

    - by Kevin
    I'm graduating in a couple of weeks, and my resume (as expected) lists the languages that I've had experience with. Previously I've put "C/C++"; however, back then I didn't have as much experience with these two languages as I do now. Now that I've formally learned these two languages, it has become evident to me (and anyone who really knows these languages) that they are similar, and completely dissimilar at the same time. Sure, most C code is compilable C++ code, but syntax and incorporation of library functions is pretty much where the similarities end. In most non-trivial problems, chances are that the desirable C++ solution will be different from the desirable C solution. My question: Will recruiters take note or care about whether you put "C/C++" as opposed to "C, C++"? Will they assume a lack of knowledge of the workings of either because of the inclusion of the first form, or perhaps see the second form as a potential "resume beefer" (listing them as two languages instead of "one")? Furthermore, for jobs that you've applied to that were particularly interested in these two languages, did the interview process include questions about the differences between C programming and C++ programming (so, about actual programming techniques, not only the extra paradigms in the latter)?

    Read the article

  • Implementing the NetBeans Project API on Maven in IntelliJ IDEA

    - by Geertjan
    James McGivern, one of the speakers I met at JAX London, is creating media software on the NetBeans Platform. However, he's using Maven and IntelliJ IDEA, and one of the features he needs is project support, i.e., the project infrastructure that's part of NetBeans IDE. The two documents that describe the NetBeans Project API are these:
    http://platform.netbeans.org/tutorials/nbm-projecttype.html
    http://netbeans.dzone.com/how-create-maven-nb-project-type
    By combining the above two, you'll understand how to create a project infrastructure on top of the NetBeans Platform with Maven. However, an additional step of complexity is added when IntelliJ IDEA is included in the mix, and therefore I created the following screencast which, in 15 minutes, puts all the pieces together. Be aware that I'm probably not using IntelliJ IDEA and Maven as optimally as I could, and I'm publishing this at least partly so that the errors of my ways can be pointed out to me. But, first and foremost, this is especially for you, James.
    Note: Intentionally no sound, only callouts explaining what I'm doing. You'll probably need to pause the movie here and there to absorb the text; for details on the text, see the two links referred to above.

    Read the article

  • Object construction design

    - by James
    I recently started to use C# to interface with a database, and there was one part of the process that appeared odd to me. When creating a SqlCommand, the method I was led to took the form:
        SqlCommand myCommand = new SqlCommand("Command String", myConnection);
    Coming from a Java background, I was expecting something more similar to:
        SqlCommand myCommand = myConnection.createCommand("Command String");
    I am asking, in terms of design, what is the difference between the two? The phrase "single responsibility" has been used to suggest that a connection should not be responsible for creating SqlCommands, but I would also say that, in my mind, the difference between the two is partly a mental one between a connection executing a command and a command acting on a connection, the latter of which seems less like what I have been led to believe OOP should be. There is also a part of me wondering if the two should be completely separate, and should only come together in some sort of connection.execute(command) method. Can anyone help clear up these differences? Are any of these methods "more correct" than the others from an OO point of view? (P.S. The fact that C# is used is completely irrelevant. It just highlighted to me that different approaches were used.)
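    For comparison, here is a minimal sketch of the factory style the asker expects, using Java's JDBC API (the in-memory H2 URL is just a placeholder; any JDBC driver behaves the same way). The connection creates the statement, so the statement is bound to its connection from birth:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class FactoryStyleDemo {
            public static void main(String[] args) throws Exception {
                // URL is a placeholder; needs a JDBC driver (here H2) on the classpath
                try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
                    // The connection acts as the factory for its commands
                    PreparedStatement cmd = conn.prepareStatement("SELECT 1");
                    try (ResultSet rs = cmd.executeQuery()) {
                        while (rs.next()) System.out.println(rs.getInt(1));
                    }
                }
            }
        }

    Worth noting: ADO.NET actually supports both styles -- myConnection.CreateCommand() exists alongside the constructor form -- so the choice between them is exactly the design question being asked here.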

    Read the article

  • Views : ViewControllers, many to one, or one to one?

    - by conor
    I have developed an Android application where, typically, each view (layout.xml) displayed on the screen has its own corresponding fragment (for the purpose of this question I may refer to this as a ViewController). These views and Fragments/ViewControllers are appropriately named to reflect what they display. This has the effect of allowing the programmer to easily pinpoint the files associated with what they see on any given screen. The above refers to the one-to-one part of my question. Please note that there are a few exceptions where very similar content is displayed on two views, so one ViewController is used for both (using a simple switch (type) to determine which layout.xml file to load). On the flip side, I am currently working on the iOS version of the same app, which I didn't develop. It seems to adopt more of a one-to-many (ViewController:View) approach: there appears to be one ViewController that handles the display logic for many different types of views. In the ViewController are an assortment of boolean flags and arrays of data (to be displayed) that are used to determine which view to load and how to display it. This seems very cumbersome to me, and coupled with no comments and ambiguous variable names, I am finding it very difficult to implement changes in the project. What do you think of the two approaches? Which one would you prefer? I'm really considering putting in some extra time at work to refactor the iOS app into a more 1:1-oriented approach. My reasoning for 1:1 over M:1 is modularity and legibility. After all, don't some people measure the quality of code based on how easy it is for another developer to pick up the reins, or how easy it is to pull out a piece of code and use it somewhere else?

    Read the article

  • How to improve my backup strategy (rsync)?

    - by GUI Junkie
    I've seen the QAs about backup solutions, but I'm asking anyway: one, because it's a personal situation I haven't solved yet, and two, because the answer can be useful for others. My situation is rather simple. I have two computers with two users and one external hard drive, and I want to sync/backup a shared directory. Currently I use rsync with the -azvu options to sync to the external drive. My problem is the round trip: all deleted files are restored! Using rsync I'm doing
    Computer A --> External disk --> Computer A
    Computer B --> External disk --> Computer B
    (I should probably do External disk --> Computer A as a last step.) I've seen 'bup' mentioned, and other QAs talk about dropbox + rsync... Another option is maybe to delete files via rsync? Can my current backup strategy be improved in some other way?

    Read the article

  • If you develop on multiple operating systems, is it better to have multiple computers + displays?

    - by dan
    I develop for iOS and Linux. My preferred OS is Ubuntu. Now my software shop (me and a partner) is developing for Windows too. The question is: is it more efficient to have multiple workstations, one for each target OS? Efficiency and productivity are a higher priority than saving money. I have a 3.4 GHz i7 desktop workstation running Ubuntu and virtualized Windows with two displays, and I'm putting together an even more powerful i7 Hackintosh with 16 GB RAM (to replace my weak 2.2 GHz i5 MacBook Pro). My specific dilemma is whether I should sell the first computer and triple-boot on the second one, or buy two more displays and run both desktop systems simultaneously. I would appreciate answers from developers who write software for multiple OSes. Running guest OSes in VirtualBox on one system is not ideal, because in my experience performance is seriously degraded under virtualization. So the choice is between dual/triple booting on one system vs. having two systems: one for OS X+iOS/Windows (dual boot) and the other for Ubuntu (which I prefer to use as my main OS). For much of our work, I write a server-side application in Linux and a client for iOS (or for Windows or OS X) simultaneously.

    Read the article

  • Is learning how to use C (or C++) a requirement in order to be a good (excellent) programmer?

    - by blueberryfields
    When I first started to learn how to program, real programmers could write assembly in their sleep. Any serious schooling in computer science would include a hefty bit of training and practice in programming using assembly. That has since changed, to the point where I see Computer Science degrees in which assembly, if included at all, is relegated to one assignment and one chapter, for a total of two weeks' work out of four years' schooling. C/C++ programming seems to have followed a similar path. I'm no longer surprised to interview university graduates who have not spent more than two weeks programming in C++, and have only read of C in a book somewhere. While the most serious CS degrees still seem to include significant time learning and using one or both of the languages, the trend is clearly towards less required C/C++ in school. It's clearly possible to make a career producing good work without ever reading or writing a single line of C or C++ code. Given all of that, is learning the two languages worth the effort? Are they at all required to excel? (Beyond the obvious, non-language-specific advice, such as "a good selection of languages is probably important for a comprehensive education" and "it's probably a good idea to keep trying out and learning new languages throughout a programmer's career, just to stretch the gray cells.")

    Read the article

  • Which approach is the most maintainable?

    - by 2rs2ts
    When creating a product which will inherently suffer from regression due to OS updates, which of these is the preferable approach to reducing maintenance cost and the likelihood of needing refactoring, when considering the task of interpreting system state and settings for a lay user?
    1. Delegate the responsibility of interpreting the results of inspecting the system to the modules which perform these tasks, or
    2. Separate the concerns of interpretation and inspection into two modules?
    The first obviously creates a blob in which a lot of code would be verbose, redundant, and hard to grok; the second creates a strong coupling in which the interpretation module essentially has to know what it expects from the inspection routines, and will have to adapt to OS changes just as much as the inspection does (see the sketch below). I would normally choose the second option for the separation of concerns, foreseeing the possibility that inspection routines could be reused, but a developer updating the product to deal with a new OS feature would have to write not only an inspection routine but also an interpretation routine, and link the two correctly -- and it gets worse for a developer who has to change which inspection routines are used to get a certain system setting, or worse yet, has to fix an inspection routine that broke after an OS patch. I wonder, is it better to have to patch one package a lot, or two packages, each somewhat less so?
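    To make the coupling in the second option concrete, here is a minimal Java sketch with entirely hypothetical names: each interpreter must know the exact output format of the inspector it is paired with, so an OS patch that changes the raw value touches both modules.

        // Hypothetical names; a sketch of option 2's inspection/interpretation split.
        interface Inspector {
            String inspect();                    // raw system state, e.g. a registry value
        }

        interface Interpreter {
            String explain(String raw);          // lay-user wording for that raw value
        }

        // One inspection routine per OS facility...
        class FirewallInspector implements Inspector {
            public String inspect() { return "ENABLED"; }   // stub: would query the OS here
        }

        // ...paired with an interpretation routine that must track its output format.
        class FirewallInterpreter implements Interpreter {
            public String explain(String raw) {
                return "ENABLED".equals(raw)
                    ? "Your firewall is on."
                    : "Your firewall is off; your system may be at risk.";
            }
        }

    An OS update that renames the raw value breaks FirewallInterpreter silently, which is exactly the maintenance cost being weighed against the one-blob alternative.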

    Read the article

  • Is using a dedicated thread just for sending gpu commands a good idea?

    - by tigrou
    The most basic game loop is like this:
        while(1) {
            update();
            draw();
            swapbuffers();
        }
    This is very simple but has a problem: some drawing commands can block, and the CPU waits when it could be doing other things (like processing the next update() call). Another possible solution I have in mind would be to use two threads: one for updating and preparing commands to be sent to the GPU, and one for sending these commands to the GPU:
        // first thread
        while(1) {
            update();
            render();  // use the game state to generate all needed triangles and
                       // commands for the GPU; put them in a buffer, no command is
                       // sent to the GPU (two buffers will be used, see below)
            pulse();   // signal the other thread that data is ready
        }
        // second thread
        while(1) {
            wait();             // wait for the first thread's data to come
            send_data_to_gpu(); // send prepared commands from the buffer to the graphics card
            swapbuffers();
        }
    Also, two buffers would be used, so one buffer can be filled with GPU commands while the other is processed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages of such a solution, especially against a simpler one (e.g. single-threaded with triple buffering enabled)?
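    As a concrete illustration, here is a minimal Java sketch of the hand-off (strings stand in for GPU commands; all names are illustrative). A bounded queue of capacity 1 gives the double-buffer behavior described: the update thread can prepare frame N+1 while the render thread is still draining frame N, and blocks only when it gets a full frame ahead.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class TwoThreadLoop {
            public static void main(String[] args) throws InterruptedException {
                // Capacity 1 = double buffering: one frame in flight, one being built
                BlockingQueue<List<String>> handoff = new ArrayBlockingQueue<>(1);

                Thread renderThread = new Thread(() -> {
                    try {
                        while (true) {
                            List<String> commands = handoff.take();  // wait()
                            for (String cmd : commands) {
                                System.out.println("gpu <- " + cmd); // send_data_to_gpu()
                            }
                            // swapbuffers() would go here
                        }
                    } catch (InterruptedException e) { /* shut down */ }
                });
                renderThread.start();

                for (int frame = 0; frame < 3; frame++) {
                    List<String> commands = new ArrayList<>();  // fresh buffer: update() + render()
                    commands.add("draw scene, frame " + frame);
                    handoff.put(commands);                      // pulse(); blocks if a frame ahead
                }
                renderThread.interrupt();  // crude shutdown; a real loop would drain first
            }
        }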

    Read the article

  • Dell upgrade to 12.04 LTS: no wifi, graphics card driver and Bluetooth problem?

    - by Mattlinux1
    (Old title of post: "Dell Inspiron m5040 upgrade to 12.04 LTS from 11.10: wifi won't work and there are two Bluetooth icons?")
    No wifi/Bluetooth problem -- both fixed: see the comments below for the Bluetooth fix and the bottom answer for the wifi fix. After the online upgrade to 12.04 LTS on my Dell Inspiron m5040, my wifi was not even picked up anymore, and I now have two Bluetooth icons at the top. So what I have been doing for now: at the boot screen, I pick the previous Linux version, which works fine but also shows the two Bluetooth icons at the top. I'm thankful for any answers to fix this. Thank you.
    OK, the Bluetooth problem is solved, per fossfreedom's answer, by deleting the other Bluetooth program in the Software Center. Now the other problem is wifi: for now I just restart the system, since wifi works under the previous-Linux-version boot option, but the upgraded Ubuntu boot option will not pick up the wifi drivers. The drivers in question are the Broadcom 802.11 Linux STA wireless driver (for use with Broadcom's BCM4311-, BCM4312-, BCM4313-, BCM4321-, BCM4322-, BCM43224-, BCM43225-, BCM43227- and BCM43228-based hardware) and the 3D-accelerated proprietary graphics driver for ATI cards (required to fully utilise the 3D potential of some ATI graphics cards, as well as provide 2D acceleration of newer cards). They will not install. Is there any way for me to make a copy of the drivers via a USB HDD and then put them on the new upgraded version?

    Read the article

  • Java enum pairs / "subenum" or what exactly?

    - by vemalsar
    I have an RPG-style Item class, and I store the type of the item in an enum (itemType.sword). I want to store a subtype too (itemSubtype.long), but I want to express the relation between the two data types (a sword can be long, short, etc., but a shield can't be long or short, only round, tower, etc.). I know this is not valid code, but it is similar to what I want:
        enum type { sword; }
        // not valid code!
        enum swordSubtype extends type.sword { short, long }
    Question: How can I define this connection between the two data types (or more exactly, between two values of the data types)? What is the simplest and most standard way?
    1. An array-like structure with all valid (itemType, itemSubtype) enum pairs, or (itemType, itemSubtype[]) so there can be multiple subtypes per type -- this would be the best. OK, but how can I construct it the simplest way?
    2. A special enum with a "subenum" set, a second-level enum, or anything else along those lines, if such a thing exists.
    3. A two-dimensional "canBePairs" boolean array with itemType and itemSubtype dimensions: "true" means the itemType (first dimension) and itemSubtype (second dimension) go together, "false" means they don't.
    4. Any other, better idea.
    Thank you very much!
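    One minimal sketch of the usual Java answer (names are illustrative, and note that short and long are reserved words, hence the upper-case constants): give each subtype constant a reference to its parent type and validate the pair at construction.

        enum ItemType { SWORD, SHIELD }

        enum ItemSubtype {
            SHORT(ItemType.SWORD),
            LONG(ItemType.SWORD),
            ROUND(ItemType.SHIELD),
            TOWER(ItemType.SHIELD);

            private final ItemType parent;
            ItemSubtype(ItemType parent) { this.parent = parent; }

            boolean isValidFor(ItemType type) { return parent == type; }
        }

        class Item {
            final ItemType type;
            final ItemSubtype subtype;

            Item(ItemType type, ItemSubtype subtype) {
                if (!subtype.isValidFor(type))  // rejects e.g. (SHIELD, LONG)
                    throw new IllegalArgumentException(subtype + " cannot pair with " + type);
                this.type = type;
                this.subtype = subtype;
            }
        }

    This keeps the pairing in one place; if a subtype may belong to several types, swap the single parent field for an EnumSet<ItemType>.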

    Read the article

  • alsa - sound issues on ubuntu 12.04

    - by tam_ubuuser
    I have a Sony E series laptop with an HDMI port. At this stage, I have tested my sound card, which provides audio out on my laptop, i.e. I can hear songs. My laptop has two sound cards: an AMD 5450 and an Intel HDA (alsamixer shows it as S/PDIF). I decided to connect the HDMI output to my new HD TV, but I get only visuals on the TV, no audio output (the HDMI cable works fine with Windows 7). I could not switch the output to the other card (I don't know how to do that). So I decided to update ALSA and ran the following in a terminal:
        sudo apt-add-repository ppa:ubuntu-audio-dev/alsa-daily
        sudo apt-get update
        sudo apt-get install alsa-hda-dkms
    Then, strangely, no login sound, and no audio output on my laptop at all. So I started the sound troubleshooting procedure from the official Ubuntu site at step 1; then my speaker icon disappeared from the taskbar. Obviously, aplay -l now reports no soundcards detected. So I ran step 4 from that guide, which lists all hardware devices in my laptop:
        *-multimedia UNCLAIMED
            description: Audio device
            product: Cedar HDMI Audio [Radeon HD 5400/6300 Series]
            vendor: Hynix Semiconductor (Hyundai Electronics)
            physical id: 0.1
            bus info: pci@0000:01:00.1
            version: 00
            width: 64 bits
            clock: 33MHz
            capabilities: pm pciexpress msi bus_master cap_list
            configuration: latency=0
            resources: memory:f0040000-f0043fff
        *-multimedia UNCLAIMED
            description: Audio device
            product: 5 Series/3400 Series Chipset High Definition Audio
            vendor: Intel Corporation
            physical id: 1b
            bus info: pci@0000:00:1b.0
            version: 05
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list
            configuration: latency=0
            resources: memory:f5e00000-f5e03fff
    That command displays the names of the two cards, but I still get nothing positive from aplay -l, so I think ALSA cannot detect my sound cards. Is there a solution to this problem? It would be even better if ALSA could channel output from multiple sound cards. How should I install and configure ALSA so that it detects the HDMI connection as soon as I connect it to my HD TV? Is it possible for ALSA and PulseAudio 2.0 to coexist, and if so, how?

    Read the article

  • Oracle OpenWorld Highlights

    - by Doug Reid
    We are in the final days of Oracle OpenWorld 2012, and the data integration team has been hard at work giving sessions, meeting customers, demonstrating product and conducting hands-on labs. It has been a great conference, but the best part is meeting our customers and learning about all the great implementations of our products. Wednesday was the last day that the exhibition hall was open, and attendees were getting in their final opportunities to see our products and meet with the product management team. Two hours before the close of the hall, people lined up to learn about GoldenGate 11gR2, Monitor, Adapters, Veridata, and all the different use cases. Here's a picture of Sjaak Vossepoel, our DIS Sales Consulting Manager for EMEA, speaking to a potential customer about the options of using Oracle GoldenGate for heterogeneous data replication. Over the last two days, the GoldenGate team ran two labs: Introduction to Oracle GoldenGate Veridata and Deep Dive into Oracle GoldenGate. Both labs were completely booked out, and unfortunately we had to turn people away. BUT all of our labs were recorded recently, so if you were not able to get into a lab or did not have enough time to complete yours, visit youtube.com/oraclegoldengate to see complete recordings of the labs we used at OpenWorld, plus more. Here are a couple of pictures from the Deep Dive into Oracle GoldenGate, led by Chris Lawless from the Product Management team. Thanks to the GoldenGate hands-on lab team for putting on a great session! We will post more information about where you can find additional OpenWorld details as they become public.

    Read the article

  • How can my team avoid frequent errors after refactoring?

    - by SDD64
    To give you a little background: I work for a company with roughly twelve Ruby on Rails developers (+/- interns). Remote work is common. Our product is made of two parts: a rather fat core, and thin-to-big customer projects built upon it. Customer projects usually expand the core; overwriting of key features does not happen. I might add that the core has some rather bad parts that are in urgent need of refactoring. There are specs, but mostly for the customer projects; the worst parts of the core are untested (as it should be...). The developers are split into two teams, working with one or two POs for each sprint. Usually, one customer project is strictly associated with one of the teams and POs. Now our problem: rather frequently, we break each other's stuff. Someone from Team A expands or refactors core feature Y, causing unexpected errors for one of Team B's customer projects. Mostly, the changes are not announced across the teams, so the bugs almost always hit unexpectedly. Team B, including the PO, believed feature Y to be stable and did not test it before releasing, unaware of the changes. How do we get rid of these problems? What kind of "announcement technique" can you recommend?

    Read the article

  • How to Create a tree type CVL in Content Server (UCM)

    - by rajeev.y.ranjan-oracle
    Steps to create a tree choice list:
    1. Create a table "tblStates" with columns "stateID" and "stateName". Click on "Add Recommended".
    2. Create another table "tblCities" with columns "cityID", "stateID" and "cityName".
    3. Create two views on these tables, namely "tblstateview" and "tblcityview".
    4. In "tblstateview", add two rows: values JH and MH in the stateID column, and Jharkhand and Maharastra in the stateName column.
    5. Similarly, in "tblcityview", add two rows: values BO and RA in the cityID column, JH and MH in the stateID column, and Bokaro and Mumbai in the cityName column.
    6. Create a relationship with parent info "tblStates" and stateID, and child info "tblCities" and stateID.
    7. Create metadata by the name "Newtest": enable the option list, go to Configure, select "use tree", and click on "go edit definition".
    8. Tree definition at level 1: (a) choose "tblstateview", (b) choose the relation "newstatecity". At level 2: choose "tblcityview".
    Log out of the native UI and Content UI, and test the tree created by the name "Newtest".

    Read the article
