Search Results

Search found 346 results on 14 pages for 'conversions'.

Page 3/14 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Should you correct compiler warnings about type conversions using explicit typecasts?

    - by BastiBechtold
    In my current project, the compiler shows hundreds of warnings about type conversions. There is a lot of code like this: iVar = fVar1*fVar2/fVar3; // or even iVar = fVar1*fVar2/fVar3+.5f; which intentionally assigns float values to ints. Of course, I could fix these warnings using iVar = int(...); but that looks kind of ugly. Would you rather live with the ugliness or live with the warnings? Or is there even a clean solution?
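
    One conventional answer: keep the cast, but bury it in a small named helper so the intent (and the rounding rule) lives in exactly one audited place. A minimal sketch, assuming C++11's std::lround is available and round-to-nearest is what the +.5f variants intend (the helper name is made up):

        #include <cmath>

        // Round-to-nearest float-to-int conversion; the explicit cast
        // silences the warning and documents the intent in one place.
        inline int round_to_int(float x) {
            return static_cast<int>(std::lround(x));
        }

        // usage:
        // iVar = round_to_int(fVar1 * fVar2 / fVar3);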

    Read the article

  • Why do Java and C# not have implicit conversions to boolean?

    - by Shaun
    Since I started Java it's been very aggravating for me that it doesn't support implicit conversions from numeric types to booleans, so you can't do things like: if (flags & 0x80) { ... } Instead you have to go through this lunacy: if ((flags & 0x80) != 0) { ... } It's the same with null and objects. Every other C-like language I know, including JavaScript, allows it, so I thought Java was just moronic, but I've just discovered that C# is the same (at least for numbers; I don't know about null/objects): http://msdn.microsoft.com/en-us/library/c8f5xwh7(VS.71).aspx Microsoft changed it on purpose from C++, so why? Clearly I'm missing something. Why change (what I thought was) the most natural thing in the world to make it longer to type? What on Earth is wrong with it?

    Read the article

  • Dell XPS 15 (L502x) sound problem

    - by lauramolenaar
    I have a problem with my sound on my Dell XPS 15. First, when I had Windows 7, my sound was pretty good (I have a JBL 2.1 speaker system with Waves Maxx audio), but since I installed Ubuntu 12.04 it has sounded very cheap, as if I had put my laptop in a tin can. I've already tried installing alsa-hda-dkms from the alsa-daily ppa (http://ppa.launchpad.net/ubuntu-audio-dev/alsa-daily). This is my audio controller:

        laura@laura-XPS-L502X:~$ lspci -v | grep -A7 -i audio
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
            Subsystem: Dell Device 04b6
            Flags: bus master, fast devsel, latency 0, IRQ 51
            Memory at f1c00000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: snd_hda_intel
            Kernel modules: snd-hda-intel

    I hope you can help me, and you can always ask me for more information.

    Result of aplay -l:

        **** List of PLAYBACK Hardware Devices ****
        card 0: PCH [HDA Intel PCH], device 0: ALC665 Analog [ALC665 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 1: ALC665 Digital [ALC665 Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    Result of aplay -L:

        default
            Playback/recording through the PulseAudio sound server
        sysdefault:CARD=PCH
            HDA Intel PCH, ALC665 Analog
            Default Audio Device
        front:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Front speakers
        surround40:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Digital
            IEC958 (S/PDIF) Digital Audio Output
        hdmi:CARD=PCH,DEV=0
            HDA Intel PCH, HDMI 0
            HDMI Audio Output
        dmix:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct sample mixing device
        dmix:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct sample mixing device
        dmix:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct sample mixing device
        dsnoop:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct sample snooping device
        dsnoop:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct sample snooping device
        dsnoop:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct sample snooping device
        hw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct hardware device without any conversions
        hw:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct hardware device without any conversions
        hw:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct hardware device without any conversions
        plughw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Hardware device with all software conversions
        plughw:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Hardware device with all software conversions
        plughw:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Hardware device with all software conversions

    Read the article

  • SQL Authority News – Download SQL Server Data Type Conversion Chart

    - by pinaldave
    Datatypes are very important concepts in SQL Server, and there is quite often a need to convert them from one datatype to another. I have seen that developers often get confused when they have to convert datatypes. There are two important concepts when it comes to datatype conversion. Implicit Conversion: Implicit conversions are those conversions that occur without specifying either the CAST or CONVERT function. Explicit Conversions: Explicit conversions are those conversions that require the CAST or CONVERT function to be specified. What this means is that if you are trying to convert a value from datetime2 to time or from tinyint to int, SQL Server will automatically convert (implicit conversion) for you. However, if you are attempting to convert timestamp to smalldatetime or datetime to int, you will need to explicitly convert them using either the CAST or CONVERT function with the appropriate parameters. Let us see a quick example of implicit conversion and explicit conversion. Implicit Conversion: Explicit Conversion: You can see from the above example how we need both types of conversion in different situations. There are so many different datatypes that it is humanly impossible to know which datatype requires implicit and which requires explicit conversion. Additionally, there are cases where the conversion is not possible at all. Microsoft has published a chart whose grid displays the various conversion possibilities as well as a quick guide. Download SQL Server Data Type Conversion Chart Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Help with debugging COM errors? (.mdi to .pdf file conversions using Microsoft Office Document Imaging)

    - by RyanW
    I thought I had a working solution for converting .mdi files to PDF using the Microsoft Office Document Imaging object model. The solution is in a Windows Service, but now I'm running into some errors that I'm having trouble tracking down info on. The exception I get is:

        The server threw an exception. (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT))
        System.Runtime.InteropServices.COMException (0x80010105): The server threw an exception. (Exception from HRESULT: 0x80010105 (RPC_E_SERVERFAULT))
           at MODI.DocumentClass.Create(String FileOpen)
           at DocumentStore.Mdi2PDF(String path, String newPath)

    Then, in the Event Viewer there is the following Application error:

        Faulting application MyWindowsServiceName.exe, version 1.0.0.0, time stamp 0x4b97f185, faulting module mso.dll, version 12.0.6425.1000, time stamp 0x49d65443, exception code 0xc0000005, fault offset 0x0000bd8e, process id 0xa5c, application start time 0x01cac08cf032914b.

    Here's the method that is doing the conversion:

        private int? Mdi2PDF(String path, String newPath)
        {
            int? pageCount = null;
            string tmpTif = Path.GetTempFileName();
            MODI.Document mdiDoc = new MODI.Document();
            mdiDoc.Create(path);
            mdiDoc.SaveAs(tmpTif, MODI.MiFILE_FORMAT.miFILE_FORMAT_TIFF_LOSSLESS, MODI.MiCOMP_LEVEL.miCOMP_LEVEL_HIGH);
            mdiDoc.Close(false);
            pageCount = Tiff2PDF(tmpTif, newPath);
            if (File.Exists(tmpTif)) File.Delete(tmpTif);
            return pageCount;
        }

    I removed all threading from the service invoking this, so that only the primary thread was initializing the MODI object, but I still got the error, so it doesn't appear to be threading-related. I also built a console app converting hundreds of documents and DID NOT get the exception. So, it seems to be caused by creating too many instances of the MODI object, but only when instantiated within a Service? Doesn't quite make sense. Anybody have any clues about these errors and how to debug them further?

    Read the article

  • Why aren't operator conversions implicitly called for templated functions? (C++)

    - by John Gordon
    I have the following code:

        template <class T>
        struct pointer {
            operator pointer<const T>() const;
        };

        void f(pointer<const float>);

        template <typename U>
        void tf(pointer<const float>);

        void g() {
            pointer<float> ptr;
            f(ptr);
            tf(ptr);
        }

    When I compile the code with gcc 4.3.3 I get a message (aaa.cc:17: error: no matching function for call to 'tf(pointer<float>&)') indicating that the compiler called 'operator pointer<const T>' for the non-templated function f(), but didn't for the templated function tf(). Why, and is there any workaround short of overloading tf() with a const and non-const version? Thanks in advance for any help.
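
    The short answer: user-defined conversions are never considered during template argument deduction, so deduction fails before operator pointer<const T>() is even consulted; for the non-template f() there is no deduction, only overload resolution, which does apply the conversion. A self-contained sketch of the two usual workarounds, written under the assumption that the parameter was meant to be the deduced form pointer<const U>:

        template <class T>
        struct pointer {
            operator pointer<const T>() const { return pointer<const T>(); }
        };

        // Deduction must match pointer<float> against pointer<const U>
        // exactly; no conversion is applied during deduction, so a bare
        // tf(ptr) has no viable match.
        template <typename U>
        void tf(pointer<const U>) {}

        int main() {
            pointer<float> ptr;
            // tf(ptr);                                  // error: deduction fails
            tf<float>(ptr);                              // OK: U fixed, conversion applies
            tf(static_cast<pointer<const float>>(ptr));  // OK: exact match after conversion
        }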

    Read the article

  • Can I turn off implicit Python unicode conversions to find my mixed-strings bugs?

    - by Tal Weiss
    When profiling our code I was surprised to find millions of calls to C:\Python26\lib\encodings\utf_8.py:15(decode). I started debugging and found that across our code base there are many small bugs, usually comparing a string to a unicode or adding a string and a unicode. Python graciously decodes the strings and performs the following operations in unicode. How kind. But expensive! I am fluent in unicode, having read Joel Spolsky and Dive Into Python... I try to keep our code internals in unicode only. My question - can I turn off this pythonic nice-guy behavior? At least until I find all these bugs and fix them (usually by adding a u'u')? Some of them are extremely hard to find (a variable that is sometimes a string...). Python 2.6.5 (and I can't switch to 3.x).

    Read the article

  • C#/.NET Little Pitfalls: The Dangers of Casting Boxed Values

    - by James Michael Hare
    Starting a new series to parallel the Little Wonders series. In this series, I will examine some of the small pitfalls that can occasionally trip up developers.

    Introduction: Of Casts and Conversions

    What happens when we try to assign between an int and a double, and vice-versa?

        double pi = 3.14;
        int theAnswer = 42;

        // implicit widening conversion, compiles!
        double doubleAnswer = theAnswer;

        // implicit narrowing conversion, compiler error!
        int intPi = pi;

    As you can see from the comments above, a conversion from a value type where there is no potential data loss can be done with an implicit conversion. However, when converting from one value type to another may result in a loss of data, you must make the conversion explicit so the compiler knows you accept this risk. That is why the conversion from double to int will not compile with an implicit conversion; we can make the conversion explicit by adding a cast:

        // explicit narrowing conversion using a cast, compiler
        // succeeds, but results may have data loss:
        int intPi = (int)pi;

    So for value types, the conversions (implicit and explicit) both convert the original value to a new value of the given type. With widening and narrowing references, however, this is not the case. Converting reference types is a bit different from converting value types. First of all, when you perform a widening or narrowing you don't really convert the instance of the object; you just convert the reference itself to the wider or narrower reference type, but the original and the new reference both refer back to the same object. Secondly, widening and narrowing for reference types refer to going down and up the class hierarchy instead of referring to precision, as in value types. That is, a narrowing conversion for a reference type means you are going down the class hierarchy (for example from Shape to Square) whereas a widening conversion means you are going up the class hierarchy (from Square to Shape).

        var square = new Square();

        // implicitly converts because all squares are shapes
        // (that is, all subclasses can be referenced by a superclass reference)
        Shape myShape = square;

        // implicit conversion not possible, not all shapes are squares!
        // (that is, not all superclasses can be referenced by a subclass reference)
        Square mySquare = (Square) myShape;

    So we had to cast the Shape back to Square because at that point the compiler has no way of knowing until runtime whether the Shape in question is truly a Square. But, because the compiler knows that it's possible for a Shape to be a Square, it will compile. However, if the object referenced by myShape is not truly a Square at runtime, you will get an invalid cast exception.

    Of course, there are other forms of conversions as well, such as user-specified conversions and helper class conversions, which are beyond the scope of this post. The main thing we want to focus on is this seemingly innocuous casting method of widening and narrowing conversions that we come to depend on every day and, in some cases, can bite us if we don't fully understand what is going on!

    The Pitfall: Conversions on Boxed Value Types Can Fail

    What if you saw the following code and - knowing nothing else - you were asked if it was legal or not, what would you think:

        // assuming x is defined above this and this
        // assignment is syntactically legal.
        x = 3.14;

        // convert 3.14 to int.
        int truncated = (int)x;

    You may think x is obviously a double (it can't be a float) because 3.14 is a double literal, but that inference is not safe. Our x could also be dynamic and this would work as well, or there could be user-defined conversions in play. But there is another, even simpler option that can often bite us: what if x is object?

        object x;

        x = 3.14;

        int truncated = (int) x;

    On the surface, this seems fine. We have a double and we place it into an object, which can be done implicitly through boxing (no cast) because all types inherit from object. Then we cast it to int. This theoretically should be possible because we know we can explicitly convert a double to an int through a conversion process which involves truncation.

    But here's the pitfall: when casting an object to another type, we are casting a reference type, not a value type! This means that it will attempt to see at runtime if the value boxed and referred to by x is of type int or derived from type int. Since it obviously isn't (it's a double after all) we get an invalid cast exception!

    Now, you may say this looks awfully contrived, but in truth we can run into this a lot if we're not careful. Consider using an IDataReader to read from a database, and then attempting to select a result row of a particular column type:

        using (var connection = new SqlConnection("some connection string"))
        using (var command = new SqlCommand("select * from employee", connection))
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // if the salary is not an int32 in the SQL database, this is an error!
                // doesn't matter if short, long, double, float; reader[] returns object!
                total += (int) reader["annual_salary"];
            }
        }

    Notice that since the reader indexer returns object, if we attempt to convert using a cast to a type, we have to make darn sure we use the true, actual type or this will fail! If the SQL database column is a double, float, short, etc. this will fail at runtime with an invalid cast exception because it attempts to convert the object reference!

    So, how do you get around this? There are two ways: you could first cast the object to its actual type (double), and then do a narrowing cast on the value to int. Or you could use a helper class like Convert, which analyzes the actual run-time type and will perform a conversion as long as the type implements IConvertible.

        object x;

        x = 3.14;

        // if you want to cast, you must cast out of object to double,
        // then cast to convert.
        int truncated = (int)(double) x;

        // or you can call a helper class like Convert which examines the
        // runtime type of the value being converted
        int anotherTruncated = Convert.ToInt32(x);

    Summary

    You should always be careful when performing a conversion cast from values boxed in object that you are actually casting to the true type (or a sub-type). Since casting from object is a narrowing of the reference, be careful that you either know the exact, explicit type you expect to be held in the object, or instead avoid the cast and use a helper class to perform a safe conversion to the type you desire.

    Technorati Tags: C#,.NET,Pitfalls,Little Pitfalls,BlackRabbitCoder

    Read the article

  • Google Analytics Goal Tracking for Sub-Domains?

    - by Hasan Khan
    I am trying to track goals in Google Analytics for a website that has the goal URL on a sub-domain. The main domain, for example, is domain.com and the sub-domain is my.domain.com. I have Google Analytics configured to track domains and all sub-domains, and I've even set up an advanced filter so I can see traffic to my sub-domains in Analytics. However, in goal tracking, you're supposed to put in the part of the URL after the domain (so if it were domain.com/conversions/ you'd put in just /conversions/). However, since for me it would be my.domain.com/conversions/, how would I input that URL into Analytics to track? Would Analytics automatically determine the URL to be on the sub-domain? Thanks!

    Read the article

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across this screen in the synchronization wizard: This screen is one of the few screens that, along with the project configuration form, doesn't come from SQL Compare. This screen was designed to solve a couple of issues that, although not specific to Oracle, are much more of a problem than they are on SQL Server: datatype conversions and NOT NULL columns. 1. Datatype conversions: SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list. However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hardcode a default for this parameter, as it is entirely dependent on the user's data context! So, in order to convert NUMBER to INTERVAL DAY TO SECOND/INTERVAL YEAR TO MONTH, we need to have feedback from the user as to what to put in this parameter while we're generating the sync script - this requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this, it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows the user to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value. 2. NOT NULL columns: Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually 3 separate scenarios for this problem that have separate solutions within the engine: Adding a NOT NULL column to a table without a rebuild: Here, the workaround is to add a column default with an appropriate value to the column you're adding: ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL; Note, however, there is something to bear in mind about this solution: once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.
    Adding a NOT NULL column to a table, where a separate change forced a table rebuild: Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause. Changing an existing NULL column to a NOT NULL column: To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value. For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely. Note that there is the same (long-running) problem in SQL Compare, but it is much more of an issue in Oracle as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.

    Read the article

  • Windows Service Fails on Launch

    - by Jeff
    I'm trying to write a Windows service. It installs fine, but fails when I run it with the following exception. I've searched for the string "MyNewProgramService", but I can't find any conversions that would throw this error. I've also added try/catch blocks to a bunch of code with custom exception handling without finding where this exception is occurring. I'm thinking it's somewhere in the auto-generated configuration/setup code. Any ideas?

        Event Type: Error
        Event Source: MyNewProgram
        Event Category: None
        Event ID: 0
        Date: 4/15/2010
        Time: 12:48:34 PM
        User: N/A
        Computer: 20F7KF1
        Description: Service cannot be started. System.InvalidCastException: Conversion from string "MyNewProgramService" to type 'Integer' is not valid. ---> System.FormatException: Input string was not in a correct format.
           at Microsoft.VisualBasic.CompilerServices.Conversions.ParseDouble(String Value, NumberFormatInfo NumberFormat)
           at Microsoft.VisualBasic.CompilerServices.Conversions.ToInteger(String Value)
           --- End of inner exception stack trace ---
           at Microsoft.VisualBasic.CompilerServices.Conversions.ToInteger(String Value)
           at TaskManagerFailureHandlerService.MyNewProgramService.OnStart(String[] args)
           at System.ServiceProcess.ServiceBase.ServiceQueuedMainCallback(Object state)

    Read the article

  • MYSQL sum() for distinct rows

    - by makeee
    I'm looking for help using sum() in my SQL query (not posting full query since the scenario is fairly simple). I have COUNT(DISTINCT conversions.id) in my query. I use DISTINCT because I'm doing "group by" for multiple columns and this ensures the same row is not counted more than once. Now I want to add: SUM(conversions.value) as conversion_value The problem is that the "value" for each row is counted more than once (due to the multiple group bys) I basically want to do SUM(conversions.value) for each DISTINCT conversions.id. Is that possible?

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami. A typical data warehouse contains Star and/or Snowflake schemas, made up of Dimensions and Facts. The facts store various numerical information, including amounts: Order Amount, Invoice Amount, etc. With the truly global nature of business nowadays, end-users want to view reports in their own currency or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts at converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users. Source Systems: OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion, etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies which can be classified by rate type, like Corporate rate, Spot rate, Period rate, etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor and we need to calculate the Rate based on them, whereas other source systems store the Currency Exchange Rate directly. OBIA Design: The data consolidation from various disparate source systems poses the challenge of conforming various currencies, rate types, exchange rates, etc., and designing the best way to present the amounts to the users without affecting performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency as the Currency Preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic: When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
    If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, are used to provide the lookups and conversions between currencies. The data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, allowing users to map the currencies to a universal Domain during implementation. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for: It is necessary to think through the currencies during the initial implementation. 1) Identify the various types of currencies that are used by your business. Understand what will be your Local (or Base) and Document currency. Identify the various global currencies in which your users will want to view reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time. 2) If the user has a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section: This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information, as explained in previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However, the challenge here is to store all 5 Global Currency Exchange Rates in a single record for each From Currency. Let's voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in the GL_DAILY_RATES table. As the name indicates, the EBS GL_DAILY_RATES table has data at a daily level. However, in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) Clean up the data in W_EXCH_RATE_G: delete the records which have Start Date > minimum conversion date, and update the End Date of the existing records.
    Compress the daily data from the GL_DAILY_RATES table into Range Records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate the Previous Rate, Previous Date and Next Date for each of the daily records from the OLTP. Filter out the records which have the same Conversion Rate as the Previous Rate, or whose Conversion Date lies within a single-day range. Mark the records as 'Keep' and 'Filter' and also get the final End Date for the single Range record (the unique combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as 'Filter' in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. The SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However, we need to pivot the data, because the data present in multiple rows in the source tables needs to be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring in all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Inserts/Updates accordingly. PeopleSoft: PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let's look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 - USD to INR - 45; 31 Jan 2010 - USD to INR - 46. PSFT stores records in the above fashion. This means that the Exchange Rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in the DW. Also, PSFT has the Exchange Rate stored as RATE_MULT and RATE_DIV, and we need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate the From Date and To Date while extracting data from the source, and this has certain assumptions: if a record gets updated/inserted in the source, it will be extracted in incremental. Also, if this updated/inserted record lies between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate range records and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get them into a Temporary table. Then we join to the Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts depend heavily on Global Exchange Rates, and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from the PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges.
    However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with this data. This has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, does not have such needs. However, SEBL, like PSFT, does not have From Date and To Date columns in the Source tables. We use extraction logic similar to that explained in the PSFT section for SEBL as well. What if all 5 Global Currencies configured are the same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all 5 Global Currencies chosen are the same. For example, if the user configures all 5 Global Currencies as 'USD', then the extract logic will not be able to generate a record for From Currency=USD. This is because not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case: we generate a record with Conversion Rate=1 for such cases. Reusable Lookups: Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who wants to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency Preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, though, and it published a Tech note for other module customers on adding toggling between currency preferences to the solution. List of Currency Preferences: Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under: <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", ... which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic.
    In Static mode, all users will see the full list as in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles… BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field: When the user selects one of the items from the currency prompt, all the amounts on that page will show in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data will show in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields will show in the currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (you cannot add USD and EUR amounts in one field), and depending on the setup (see the next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of a currency code, like USD, EUR… For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code"… In the form of a currency symbol ($ for USD, € for EUR, …). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column], where Column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.

    Read the article

  • No HDMI Audio with GeForce 9600GT and nForce board

    - by Bobby
    I've been trying to get HDMI with sound working for the last few days, and I'm a little bit out of ideas. (I've verified that the hardware/setup works via Windows.) aplay does not list my HDMI device:

        $ aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: NVidia [HDA NVidia], device 0: ALC662 rev1 Analog [ALC662 rev1 Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: NVidia [HDA NVidia], device 1: ALC662 rev1 Digital [ALC662 rev1 Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    I've already compiled the alsa drivers (1.0.24) from a snapshot (with --with-oss=no) and added the line

        options snd-hda-intel model=auto # Tried 3stack-dig and 6stack-dig too

    to /etc/modprobe.d/alsa-base.conf. Still, the device does not show up. If it is important, the HDMI TV is at the moment not configured to be part of the X session (I've tried that too, at least with an X restart, and it didn't change anything). What did I miss?

        $ lspci
        00:00.0 Host bridge: nVidia Corporation Device 07c3 (rev a2)
        00:00.1 RAM memory: nVidia Corporation nForce 630i memory controller (rev a2)
        00:01.0 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
        00:01.1 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
        00:01.2 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
        00:01.3 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
        00:01.4 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
        00:01.5 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
        00:01.6 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
        00:02.0 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1)
        00:03.0 ISA bridge: nVidia Corporation MCP73 LPC Bridge (rev a2)
        00:03.1 SMBus: nVidia Corporation MCP73 SMBus (rev a1)
        00:03.2 RAM memory: nVidia Corporation MCP73 Memory Controller (rev a1)
        00:03.4 RAM memory: nVidia Corporation MCP73 Memory Controller (rev a1)
        00:04.0 USB Controller: nVidia Corporation GeForce 7100/nForce 630i USB (rev a1)
        00:04.1 USB Controller: nVidia Corporation MCP73 [nForce 630i] USB 2.0 Controller (EHCI) (rev a1)
        00:08.0 IDE interface: nVidia Corporation MCP73 IDE (rev a1)
        00:09.0 Audio device: nVidia Corporation MCP73 High Definition Audio (rev a1)
        00:0a.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1)
        00:0b.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1)
        00:0c.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1)
        00:0d.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1)
        00:0e.0 IDE interface: nVidia Corporation MCP73 IDE (rev a2)
        00:0f.0 Ethernet controller: nVidia Corporation MCP73 Ethernet (rev a2)
        02:00.0 VGA compatible controller: nVidia Corporation G94 [GeForce 9600 GT] (rev a1)

        $ aplay -L
        default
        pulse
            Playback/recording through the PulseAudio sound server
        front:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            Front speakers
        surround40:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Digital
            IEC958 (S/PDIF) Digital Audio Output
        dmix:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            Direct sample mixing device
        dmix:CARD=NVidia,DEV=1
            HDA NVidia, ALC662 rev1 Digital
            Direct sample mixing device
        dsnoop:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            Direct sample snooping device
        dsnoop:CARD=NVidia,DEV=1
            HDA NVidia, ALC662 rev1 Digital
            Direct sample snooping device
        hw:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            Direct hardware device without any conversions
        hw:CARD=NVidia,DEV=1
            HDA NVidia, ALC662 rev1 Digital
            Direct hardware device without any conversions
        plughw:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            Hardware device with all software conversions
        plughw:CARD=NVidia,DEV=1
            HDA NVidia, ALC662 rev1 Digital
            Hardware device with all software conversions

    Read the article

  • Overriding to_xml for collection of ActiveRecord objects

    - by Chirantan
    Okay, I know you can override the to_xml method for a single instance of an ActiveRecord object, and it works just fine for me. But how would I go about overriding the to_xml method for a collection of objects? Suppose for a Task model instance, I implemented to_xml like this:

        def to_xml
          super(:methods => [:tag_list],
                :include => {:project => {:include => {:folder => {}}}, :folder => {}})
        end

    Works just fine when a single task is to be serialized to XML. But when my code runs for a collection of tasks, as in

        render :xml => @tasks.to_xml

    I get wrong number of arguments (1 for 0):

        /home/chirantan/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/array/conversions.rb:189:in `to_xml'
        /home/chirantan/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/array/conversions.rb:189:in `to_xml'
        /home/chirantan/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/array/conversions.rb:189:in `each'
        /home/chirantan/.gem/ruby/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/array/conversions.rb:189:in `to_xml'
        /var/lib/gems/1.8/gems/builder-2.1.2/lib/builder/xmlbase.rb:134:in `call'
        /var/lib/gems/1.8/gems/builder-2.1.2/lib/builder/xmlbase.rb:134:in `_nested_structures'
        /var/lib/gems/1.8/gems/builder-2.1.2/lib/builder/xmlbase.rb:58:in `method_missing'
        /var/lib/gems/1.8/gems/builder-2.1.2/lib/builder/xmlbase.rb:31:in `tag!'
        /~/blah/app/controllers/tasks_controller.rb:412:in `completed'

    How do I make this work?

    Read the article

  • Intra-Unicode "lean" Encoding Converters

    - by Mystagogue
    Windows provides encoding conversion functions ("MultiByteToWideChar" and "WideCharToMultiByte") which are capable of UTF-8 to/from UTF-16 conversions, among other things. But I've seen people offer home-grown 30 to 40 line functions that claim also to perform UTF-8 / UTF-16 encoding conversions. My question is, how reliable are such tiny converters? Can such a tiny amount of code handle problems such as converting a UTF-16 surrogate pair into a single four-byte UTF-8 sequence (rather than making the mistake of converting it into a pair of three-byte sequences)? Can they correctly spot "unpaired" surrogate input, and provide an error? In short, are such tiny converters mere toys, or can they be taken seriously? For that matter, why does unicode.org seemingly offer no advice on an algorithm for accomplishing such conversions?
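
    For what it's worth, the surrogate-pair half of the job really does fit in roughly that line count, but only if the converter decodes UTF-16 into code points before encoding, rather than translating code units one at a time. A minimal sketch of that decode-then-encode core (error handling reduced to exceptions; a real converter would also want to validate on the UTF-8 side):

        #include <cstddef>
        #include <cstdint>
        #include <stdexcept>
        #include <string>

        std::string utf16_to_utf8(const std::u16string& in) {
            std::string out;
            for (std::size_t i = 0; i < in.size(); ++i) {
                uint32_t cp = in[i];
                if (cp >= 0xD800 && cp <= 0xDBFF) {          // high surrogate: must pair
                    if (i + 1 >= in.size())
                        throw std::runtime_error("unpaired surrogate");
                    uint32_t lo = in[++i];
                    if (lo < 0xDC00 || lo > 0xDFFF)
                        throw std::runtime_error("unpaired surrogate");
                    cp = 0x10000 + ((cp - 0xD800) << 10) + (lo - 0xDC00);
                } else if (cp >= 0xDC00 && cp <= 0xDFFF) {   // stray low surrogate
                    throw std::runtime_error("unpaired surrogate");
                }
                if (cp < 0x80) {
                    out += static_cast<char>(cp);
                } else if (cp < 0x800) {
                    out += static_cast<char>(0xC0 | (cp >> 6));
                    out += static_cast<char>(0x80 | (cp & 0x3F));
                } else if (cp < 0x10000) {
                    out += static_cast<char>(0xE0 | (cp >> 12));
                    out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
                    out += static_cast<char>(0x80 | (cp & 0x3F));
                } else {                                     // one 4-byte sequence,
                    out += static_cast<char>(0xF0 | (cp >> 18));   // not two 3-byte ones
                    out += static_cast<char>(0x80 | ((cp >> 12) & 0x3F));
                    out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
                    out += static_cast<char>(0x80 | (cp & 0x3F));
                }
            }
            return out;
        }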

    Read the article

  • LLBLGen Pro v3.5 has been released!

    - by FransBouma
    Last weekend we released LLBLGen Pro v3.5! Below is the list of what's new in this release. Of course, not everything is on this list, like the large amount of work we put into refactoring the runtime framework. The refactoring was necessary because our framework has two paradigms which were added to the framework at different times, and from a design perspective in the wrong order (the paradigm we added first, SelfServicing, should have been built on top of Adapter, the other paradigm, which was added more than a year after the first released version). The refactoring made sure the framework re-uses more code across the two paradigms (they already shared a lot of code) and is better prepared for the future. We're not done yet, but refactoring a massive framework like ours without breaking interfaces and existing applications is ... a bit of a challenge ;) To celebrate the release of v3.5, we give every customer a 30% discount! Use the coupon code NR1ORM with your order :) The full list of what's new: Designer: Rule-based .NET Attribute definitions. It's now possible to specify a rule using fine-grained expressions with an attribute definition to define which elements of a given type will receive the attribute definition. Rules can be assigned to attribute definitions on the project level, to make it even easier to define attribute definitions in bulk for many elements in the project. More information... Revamped Project Settings dialog. Multiple project-related properties and settings dialogs have been merged into a single dialog called Project Settings, which makes it easier to configure the various settings related to project elements. It also makes it easier to find features previously not used by many (e.g. type conversions). More information... Home tab with Quick Start Guides. To make new users feel right at home, we added a home tab with quick start guides which guide you through four main use cases of the designer. System Type Converters. Many common conversions have been implemented by default in system type converters, so users don't have to develop their own type converters anymore for these type conversions. Bulk Element Setting Manipulator. Changing setting values for multiple project elements used to be cumbersome, requiring a lot of clicking and opening of various editors. This dialog makes changing settings for multiple elements very easy. EDMX Importer. It's now possible to import entity model data from an existing Entity Framework EDMX file. Other changes and fixes: see the online documentation for the full list of changes and fixes. LLBLGen Pro Runtime Framework: WCF Data Services (OData) support has been added. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF Data Services application using the VS.NET tools for WCF Data Services. WCF Data Services is a Microsoft technology for .NET 4 to expose your domain model using OData. More information... New query specification and execution API: QuerySpec. QuerySpec is our new query specification and execution API, an alternative to Linq and our more low-level API. It's built, like our Linq provider, on top of our lower-level API. More information... SQL Server 2012 support. The SQL Server DQE allows paging using the new SQL Server 2012 style. More information... System Type converters. For a common set of types the LLBLGen Pro runtime framework contains built-in type conversions, so you don't need to write your own type converters anymore.
    Public/NonPublic property support. It's now possible to mark a field / navigator as non-public, which is reflected in the runtime framework as an internal/friend property instead of a public property. This way you can hide properties from the public interface of a generated class and still access them through code added to the generated code base. FULL JOIN support. It's now possible to perform FULL JOINs using the native query API and QuerySpec. It's left to the developer to check whether the target database supports FULL (OUTER) JOINs. Using a FULL JOIN with entity fetches is not recommended, and it should only be used when neither participant in the join is the target of the fetch. Dependency Injection Tracing. It's now possible to enable tracing on dependency injection. Enable tracing at level '4' on the traceswitch 'ORMGeneral'. This will emit trace information about which instance of which type got an instance of type T injected into property P. Entity Instances in projections in Linq. It's now possible to return an entity instance in a custom Linq projection. It's now also possible to pass this instance to a method inside the query projection. Inheritance is fully supported in this construct. Entity Framework support: The Entity Framework has been updated in recent years with code-first support and a new, simpler context API: DbContext (with DbSet). The amount of code to generate is smaller and the context simpler. LLBLGen Pro v3.5 comes with support for DbContext and DbSet and generates code which utilizes these new classes. NHibernate support: NHibernate v3.2+ built-in proxy factory factory support. By default the built-in ProxyFactoryFactory is selected. FluentNHibernate Session Manager uses 1.2 syntax. Fluent NHibernate mappings generate a SessionManager which uses the v1.2 syntax for the ProxyFactoryFactory location. Optionally emit schema / catalog name in mappings: two settings have been added which allow the user to control whether the catalog name and/or schema name as known in the project in the designer is emitted into the mappings.

    Read the article

  • How one decision can turn web services to hell

    - by DigiMortal
    In this posting I will show you how one stupid decision can turn a developer's life into hell. There is a project where a bunch of complex applications exchange data frequently and it is very hard to change something without additional expense. Well, one analyst thought that strings are the silver bullet of web services. Read what happened. Bad bad mistake: In the early stages of the integration project there was an analyst who also established the architecture and technical design for the web services. There was one very bad mistake this analyst made: all data must be converted to strings before exchange! Yes, that's correct, this was the requirement. All integers, decimals and dates come in and go out as strings. There was also an explanation for this requirement: this way we can avoid data type conversion errors! Well, this guy works somewhere else already and I hope he works in some burger restaurant - far away from computers. Consequences: If you first look at this requirement it may seem like a little annoying piece of crap you can easily survive. But let's see the real consequences one stupid decision can cause: a hell of a load of data conversions are done by receiving applications and SSIS packages; the SSIS packages are not error-proof and they depend heavily on the strings they get from different services; there is more than one format per type in use across the different services; for larger amounts of data all these conversion tasks slow down the work of the integration packages; practically all developers have been in a hurry with some SSIS import tasks, and some fields that are not used in different calculations in the SSAS cube are imported without data conversions (for example, some prices are strings in the format "1.021 $"). The most painful problem for developers is the data conversion part, because they don't expect such a stupid requirement to be stated and therefore they are not able to estimate the time their tasks will take on these web services. Also, developers must be prepared for cases when suddenly some service sends data that is not in an acceptable format, and they must solve the problems ASAP. This puts unexpected load on developers and they are not very happy with it, because they can't understand why they have to live with this horror if it is possible to fix. What to do if you see something like this? Well, explain the problem to the customer and demand special tasks in the project schedule to get this mess solved before going on with new development. It is cheaper to solve the problems now than later.

    Read the article

  • Dashboard to aggregate Google Analytics, Facebook, YouTube etc tracking data?

    - by Richard
    I'd like to see as much tracking data as possible about my online presence, in one single dashboard - so views/conversions from Google Analytics data, the performance of my Facebook campaigns via the Insights API, views/clicks from my YouTube campaigns, etc. This could be as simple as a graph with time on the x-axis, and key indicators from each source on the y-axis (conversions from Analytics, likes on Facebook, views on YouTube, etc). The idea is that I can see customer engagement with each source, over time. I can write my own such dashboard easily enough, but I wondered if there was something off-the-shelf that already did this. Apologies if this isn't the right forum for such a question - would appreciate tips for the best place to ask.

    Read the article

  • Can a single argument constructor with a default value be subject to implicit type conversion

    - by Richard
    I understand the use of the explicit keyword to avoid the implicit type conversions that can occur with a single argument constructor, or with a constructor that has multiple arguments of which only the first does not have a default value. However, I was wondering, does a single argument constructor with a default value behave the same as one without a default value when it comes to implicit conversions?
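
    For what it's worth: a constructor that can be called with a single argument is a converting constructor whether or not that argument (or any trailing parameter) carries a default, so it participates in implicit conversions exactly as the no-default form does, unless marked explicit. A minimal sketch (the Meters type is made up):

        #include <iostream>

        struct Meters {
            // Callable with one argument, so still a converting
            // constructor; marking it `explicit` would forbid the
            // implicit conversion in main().
            Meters(double value = 0.0) : value(value) {}
            double value;
        };

        void print(Meters m) { std::cout << m.value << " m\n"; }

        int main() {
            print(3.5);   // double -> Meters happens implicitly
            Meters m;     // the default argument is still usable
            print(m);
        }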

    Read the article

  • Is there a good date/time API available for Scala?

    - by Erik Engbrecht
    I'm looking for something akin to JodaTime or JSR 310 for Scala that leverages nice Scala features such as operator overloading and doesn't rely on implicit conversions (I have an irrational fear of implicit conversions). I'm aware of http://github.com/jorgeortiz85/scala-time, but it just pimps JodaTime with implicits.

    Read the article

  • Providing *implicit* conversion operator for template specialization

    - by Neil G
    I have a templated sparse_vector<T> class, and I am also using Boost UBLAS. How would I provide implicit conversions between sparse_vector<double> and boost::numeric::ublas::compressed_vector<double>? I would also like to provide similar conversions between std::vector<double> and boost::numeric::ublas::vector<double>. (I am using gcc 4.4 with C++0x enabled.)
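
    One constraint worth noting: conversion operators must be member functions, so the sparse_vector<double> -> compressed_vector<double> direction can be a member operator on sparse_vector, the reverse direction needs a converting constructor on sparse_vector, and std::vector<double> <-> ublas::vector<double> can only get named helper functions, since neither of those classes can be modified. A rough sketch of the member operator, assuming a hypothetical map-based layout for sparse_vector:

        #include <boost/numeric/ublas/vector_sparse.hpp>
        #include <cstddef>
        #include <map>

        namespace ublas = boost::numeric::ublas;

        template <class T>
        class sparse_vector {
        public:
            sparse_vector() : size_(0) {}
            // Implicit conversion to ublas; must be a member function.
            operator ublas::compressed_vector<T>() const {
                ublas::compressed_vector<T> result(size_);
                for (auto it = values_.begin(); it != values_.end(); ++it)
                    result(it->first) = it->second;  // mutable access inserts the element
                return result;
            }
        private:
            std::size_t size_;
            std::map<std::size_t, T> values_;  // hypothetical index -> value storage
        };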

    Read the article

  • Converting currencies via intermediate currencies.

    - by chillitom
    class FxRate { string Base { get; set; } string Target { get; set; } double Rate { get; set; } } private IList<FxRate> rates = new List<FxRate> { new FxRate {Base = "EUR", Target = "USD", Rate = 1.3668}, new FxRate {Base = "GBP", Target = "USD", Rate = 1.5039}, new FxRate {Base = "USD", Target = "CHF", Rate = 1.0694}, new FxRate {Base = "CHF", Target = "SEK", Rate = 8.12} // ... }; Given a large yet incomplete list of exchange rates where all currencies appear at least once (either as a target or base currency): What algorithm would I use to be able to derive rates for exchanges that aren't directly listed? I'm looking for a general purpose algorithm of the form: public double Rate(string baseCode, string targetCode, double currency) { return ... } In the example above a derived rate would be GBP-CHF or EUR-SEK (which would require using the conversions for EUR-USD, USD-CHF, CHF-SEK) Whilst I know how to do the conversions by hand I'm looking for a tidy way (perhaps using LINQ) to perform these derived conversions perhaps involving multiple currency hops, what's the nicest way to go about this?
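
    The usual way to frame this is as a graph problem: currencies are nodes, each listed rate is an edge (together with its implied 1/rate inverse), and a derived rate is the product of the rates along any path between the two codes. Not the tidy LINQ one-liner asked for, but a sketch of that approach via breadth-first search (written in C++ here; the shape ports directly to C#):

        #include <map>
        #include <queue>
        #include <set>
        #include <stdexcept>
        #include <string>
        #include <utility>
        #include <vector>

        // Adjacency list: currency code -> (neighbour, rate) edges.
        using Graph = std::map<std::string, std::vector<std::pair<std::string, double>>>;

        // Insert each quoted rate together with its implied inverse.
        void add_rate(Graph& g, const std::string& base, const std::string& target, double r) {
            g[base].push_back({target, r});
            g[target].push_back({base, 1.0 / r});
        }

        // Breadth-first search from `base`, multiplying rates along the
        // path; handles any number of intermediate hops.
        double rate(const Graph& g, const std::string& base, const std::string& target) {
            std::queue<std::pair<std::string, double>> frontier;
            std::set<std::string> visited{base};
            frontier.push({base, 1.0});
            while (!frontier.empty()) {
                auto cur = frontier.front();
                frontier.pop();
                if (cur.first == target) return cur.second;
                auto it = g.find(cur.first);
                if (it == g.end()) continue;
                for (const auto& edge : it->second)
                    if (visited.insert(edge.first).second)
                        frontier.push({edge.first, cur.second * edge.second});
            }
            throw std::runtime_error("no conversion path between currencies");
        }

    With the rates above loaded through add_rate, rate(g, "GBP", "CHF") would chain GBP -> USD -> CHF. Note that BFS finds the fewest-hop path, which matters if different paths through the table imply slightly different derived rates.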

    Read the article

  • How can I tell if I am overusing multi-threading?

    - by exhuma
    NOTE: This is a complete re-write of the question. The text before was way too lengthy and did not get to the point! If you're interested in the original question, you can look it up in the edit history. I currently feel like I am over-using multi-threading. I have 3 types of data, A, B and C. Each A can be converted to multiple Bs and each B can be converted to multiple Cs. I am only interested in treating Cs. I could write this fairly easily with a couple of conversion functions. But I caught myself implementing it with threads, three queues (queue_a, queue_b and queue_c). There are two threads doing the different conversions, and one worker: ConverterA reads from queue_a and writes to queue_b ConverterB reads from queue_b and writes to queue_c Worker handles each element from queue_c The conversions are fairly mundane, and I don't know if this model is too convoluted. But it seems extremely robust to me. Each "converter" can start working even before data has arrived on the queues, and at any time in the code I can just "submit" new As or Bs and it will trigger the conversion pipeline which in turn will trigger a job by the worker thread. Even the resulting code looks simpler. But I still am unsure if I am abusing threads for something simple.
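
    For contrast, the "couple of conversion functions" version is just three nested loops, shown below as a C++ sketch with A, B, C and the converters stubbed out as placeholders. Whether the queue-and-thread pipeline earns its extra moving parts then mostly depends on whether the conversion stages are expensive enough (I/O or heavy CPU) for the overlap to pay off:

        #include <vector>

        // Placeholder types and converters standing in for the real ones.
        struct A {}; struct B {}; struct C {};

        std::vector<B> convert_a(const A&) { return std::vector<B>(2); }
        std::vector<C> convert_b(const B&) { return std::vector<C>(2); }
        void handle(const C&) { /* the only work we actually care about */ }

        // Sequential equivalent of the three-queue pipeline: each A fans
        // out to Bs, each B fans out to Cs, each C is handled in turn.
        void process(const std::vector<A>& as) {
            for (const A& a : as)
                for (const B& b : convert_a(a))
                    for (const C& c : convert_b(b))
                        handle(c);
        }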

    Read the article
