Search Results

Search found 14416 results on 577 pages for 'standard reports'.

Page 14/577 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • IDC and Beecham Research: New analyst reports and webcast

    - by terrencebarr
    Embedded Java is getting a lot of attention in the analyst community these days. Check out these new analyst reports and a webcast by IDC as well as Beecham Research.
    IDC published a White Paper titled "Ghost in the Machine: Java for Embedded Development", along with an accompanying webcast recording. Highlights of the White Paper:
    - The embedded systems industry is projected to continue to expand rapidly, reaching $2.1 trillion in 2015.
    - The market for intelligent systems, where Java's rich set of services is most needed, is projected to grow to 78% of all embedded systems in 2015.
    - Java is widely used in embedded systems and is expected to continue to gain traction in areas where devices present an application platform for developers.
    The free IDC webcast and White Paper can be accessed here.
    Beecham Research published a report titled "Designing an M2M Platform for the Connected World". Highlights of the report:
    - The total revenue for M2M services is projected to double, from almost $15 billion in 2012 to over $30 billion in 2016.
    - The primary driver for M2M solutions is now enabling new services.
    - Important developing trends: enterprise integration (more data, and using the data more strategically), new markets in the Internet of Things (IoT), and processing large amounts of data in real time (complex event processing).
    - Using the same software development environment for all parts of an M2M solution is a major advantage if the software can be optimized for each part of the solution.
    The free Beecham Research report can be accessed here.
    Cheers, – Terrence
    Filed under: Mobile & Embedded. Tagged: iot, Java Embedded, M2M, research, webcast

    Read the article

  • Source-control your BI Publisher reports

    - by Dmitry Nefedkin
    Version control systems (VCS) like Subversion, Git and others have been widely adopted and have become must-have tools in any software development project. Source artifacts are checked out, modified, and checked in, and the whole history of changes is tracked by the VCS. But what if the development tool stores the source/configuration artifacts not on your laptop's hard drive, but in some shared repository instead? Well, we definitely need a way to export/import our artifacts from/to this repository.
    Oracle BI Publisher report development is based on such a shared repository model (the catalog), and starting with BI Publisher 11.1.1.5 Oracle ships the Catalog Utility, which can be used to export/import reports from the command line. To start using the BI Publisher Catalog Utility you should:
    - Go to the file system of the server where the BI Publisher binaries have been installed and locate the following file: <MW_HOME>/Oracle_BI1/clients/bipublisher/BIPCatalogUtil.zip
    - Copy the file to your local filesystem and unzip it. I will refer to this unzipped directory as <BIP_CLIENT_DIR> below.
    - If you do not want to pass the BI Publisher server URL, username and password on every invocation, modify the corresponding values inside <BIP_CLIENT_DIR>/config/xmlp-client-config.xml
    - Open a terminal window and go to <BIP_CLIENT_DIR>/bin
    - Make sure that the following environment variables are set: JAVA_HOME, ORACLE_HOME
    - Now it's time to run the utility. If you are on Linux, just run BIPCatalogUtil.sh and pass the parameters according to the utility documentation. If you are on MS Windows, the bad news is that the command script for MS Windows is missing, and support.oracle.com note 1333726.1 says that a temporary solution is to "create a .cmd file by setting up a classpath and copying the same commands from the .sh script". The good news is that I've created this script already; please download it from GitHub.
    Hope you will find this utility useful in your day-to-day BI Publisher development.

    Read the article

  • unable to open SSRS reports from domain IP

    - by Lalit
    Hi, I have developed some SSRS reports. They run fine locally, but after being deployed to the domain server they show this exception: "XML Parsing Error: no element found. Location: http://{MyDomainIP}:{port}/Reports/Pages/Folder.aspx. Line Number 1, Column 1". I have deployed these reports under Windows authentication. I then tried to give anonymous access so that I could reach them, but in IIS I could not find a virtual directory for these reports - so where are they deployed? I also cannot understand why even the Admin account is not permitted to view the reports. Please guide me; I am using IIS 6.0 and SQL Server 2008 R2, and I am totally new to this kind of stuff. Edited: how can we give anonymous access to the SSRS reports, so that they do not ask for a username and password? I know we can do this through IIS, but somehow I cannot find my SSRS virtual directory. How can I do that?

    Read the article

  • Crystal Reports API - chart: "for all records" or "for each record"?

    - by Epaga
    Is there any way to determine whether a chart in Crystal Reports 2008 (using either the RAS SDK or the older RDC API) is set to display values "for each record" or "for all records"? I can get access to a CrystalDecisions.ReportAppServer.ReportDefModel.ChartObject but can't find any API there to access which type of chart it is - "for each" or "for all".

    Read the article

  • Reports in Java. What tool to use?

    - by Tom
    Hi, I need to create some reports in different formats (xls, pdf, rtf). I am currently using JasperReports in conjunction with iReport. I have no major complaints about it (except for the cases when iReport messes up my xml files), but I've been having some problems with it when exporting to xls files and with some "special" characters, such as '&'. Is there a widely used alternative? Is JasperReports the right choice?
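
    For reference, a minimal sketch of the kind of export path being described - this is an illustration, not code from the post. The report file, the "ReportTitle" parameter and the output names are placeholders, and the exporter classes are the classic net.sf.jasperreports ones, so adjust to the JasperReports version actually in use:

        import java.util.HashMap;
        import java.util.Map;

        import net.sf.jasperreports.engine.JREmptyDataSource;
        import net.sf.jasperreports.engine.JRException;
        import net.sf.jasperreports.engine.JRExporterParameter;
        import net.sf.jasperreports.engine.JasperCompileManager;
        import net.sf.jasperreports.engine.JasperExportManager;
        import net.sf.jasperreports.engine.JasperFillManager;
        import net.sf.jasperreports.engine.JasperPrint;
        import net.sf.jasperreports.engine.JasperReport;
        import net.sf.jasperreports.engine.export.JRXlsExporter;

        public class ReportExportSketch {
            public static void main(String[] args) throws JRException {
                // "invoice.jrxml" and the "ReportTitle" parameter are hypothetical.
                Map<String, Object> params = new HashMap<String, Object>();
                params.put("ReportTitle", "Invoices");

                // Compile the design and fill it; a real report would pass a JDBC connection
                // or a populated data source instead of JREmptyDataSource.
                JasperReport report = JasperCompileManager.compileReport("invoice.jrxml");
                JasperPrint print = JasperFillManager.fillReport(report, params, new JREmptyDataSource());

                // PDF has a one-line convenience method.
                JasperExportManager.exportReportToPdfFile(print, "invoice.pdf");

                // XLS (and RTF, via JRRtfExporter) goes through an exporter object,
                // which is where encoding and special-character quirks usually surface.
                JRXlsExporter xls = new JRXlsExporter();
                xls.setParameter(JRExporterParameter.JASPER_PRINT, print);
                xls.setParameter(JRExporterParameter.OUTPUT_FILE_NAME, "invoice.xls");
                xls.exportReport();
            }
        }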

    Read the article

  • Using C# with Crystal Reports, How Can I Create 4-Up Subreports?

    - by C. Griffin
    The simplest example that I can provide for what I want to do is this: I need to create a Report, whose only requirement is that I have (4) of the same subreport on the page (imagine 4 portrait-oriented post cards on a page), each quadrant using a separate row from my datatable, yet all 4 are identical in terms of fields. If there are more than 4, it needs to carry over to a new page with the same format. I'm using C# and the built-in Crystal Reports Basic for the task.

    Read the article

  • How to show different icons in Crystal reports depending on the field value?

    - by DarkDeny
    We are using Crystal Reports 12 in one of our projects. Currently I need to create a report template which should show different icons based on the value of a certain field. That field contains a number storing some kind of status, and I have several icons corresponding to the statuses. At the moment I can't figure out how to implement such a thing in the Crystal Reports designer. Could someone please help me?

    Read the article

  • Remapping keyboard to get extra cursor keys - but why stick to VIM standard hjkl

    - by Carlo V. Dango
    Inspired by VIM, I recently remapped my keyboard layout to get extra keys for cursor movement. Being fluent in both QWERTY and DVORAK, it came quite naturally to me to remap the DF and JK keys rather than the VIM-standard hjkl keys. Here is my reasoning:
    - It enables me to quickly identify cursor keys, since F and J are physically marked on my keyboard.
    - I'm using two hands for movement rather than one. I guess from DVORAK I learned to appreciate shifting between hands rather than using primarily one hand.
    - It maps well to the Kinesis keyboard mapping (http://www.kinesis-ergo.com/advantage.htm) that I use occasionally.
    - I feel I'm using my strongest fingers. I don't have to stretch my right index finger to reach H as I would using the VIM layout.
    However, since I am still doing green-field explorations on the cursor key remapping, I'd like others to share their experiences and/or criticize my suggested mapping.
    PS. If you want to toy around with my remapping using AutoHotkey, here is my script:
        ; extra cursor keys
        !d:: Send {Left}
        <^>!d:: Send {Left}
        !f:: Send {Right}
        <^>!f:: Send {Right}
        !j:: Send {Up}
        <^>!j:: Send {Up}
        !k:: Send {Down}
        <^>!k:: Send {Down}
    The question: Is this mapping sane, or is the VIM mapping superior?

    Read the article

  • Can't login to Guest Session or Standard Account?

    - by Rory
    Thanks in advance for any help. Ubuntu 12.04. Up until recently my mother has been using the Guest Session when she logs on; now, the Guest Session will not log in. When I try to log in to it, it bounces me back to the login screen. I then tried to log in to a Standard account (Alberta), which I had made before losing the ability to log in to Guest, and it turns out I cannot log in to it either - it gives the "Invalid Password" error. So then I tried to change her password from my own account (Rory), which is the master account. Under the Login Options where the Password option is, it says "Account disabled" and it will not let me change this; I try to apply a password and get no error, but it still says "Account disabled". Then I tried to delete the Alberta account altogether, but got this error: I also tried sudo userdel -r Alberta but got this error: "userdel: cannot lock /etc/shadow; try again later." I don't know what to do now. Any help appreciated.

    Read the article

  • After upgrading to 12.04 from 10.10 my mythbuntu standard MCEUSB remote no longer works

    - by keepitsimpleengineer
    I had no problems using my Windows Media Center Remote with 10.10 Mythbuntu, but after upgrading, it no longer affects Mythbuntu. I have verified and re-installed it in Mythbuntu Control Centre. I have used irw to verify the ir buttons actions are properly received by the HTPC. How do I go about fixing this?
    3.2.0-26-generic (#41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012)
    Xorg version: 1.11.3 (16 July 2012 08:06:31PM)
    GCC: 4.6 (x86_64-linux-gnu)
    Current updates as of 2012-07-21
    $ cat /etc/lirc/hardware.conf
        #Chosen Remote Control
        REMOTE="Windows Media Center Transceivers/Remotes (all)"
        REMOTE_MODULES="lirc_dev mceusb"
        REMOTE_DRIVER=""
        REMOTE_DEVICE="/dev/lirc0"
        REMOTE_SOCKET=""
        REMOTE_LIRCD_CONF="mceusb/lircd.conf.mceusb"
        REMOTE_LIRCD_ARGS=""
        #Chosen IR Transmitter
        TRANSMITTER="None"
        TRANSMITTER_MODULES=""
        TRANSMITTER_DRIVER=""
        TRANSMITTER_DEVICE=""
        TRANSMITTER_SOCKET=""
        TRANSMITTER_LIRCD_CONF=""
        TRANSMITTER_LIRCD_ARGS=""
        #Enable lircd
        START_LIRCD="true"
        #Don't start lircmd even if there seems to be a good config file
        #START_LIRCMD="false"
        #Try to load appropriate kernel modules
        LOAD_MODULES="true"
        # Default configuration files for your hardware if any
        LIRCMD_CONF=""
        #Forcing noninteractive reconfiguration
        #If lirc is to be reconfigured by an external application
        #that doesn't have a debconf frontend available, the noninteractive
        #frontend can be invoked and set to parse REMOTE and TRANSMITTER
        #It will then populate all other variables without any user input
        #If you would like to configure lirc via standard methods, be sure
        #to leave this set to "false"
        FORCE_NONINTERACTIVE_RECONFIGURATION="false"
        START_LIRCMD=""
    # lsusb | grep -i infrared
        Bus 003 Device 002: ID 0471:0815 Philips (or NXP) eHome Infrared Receiver

    Read the article

  • How to ignore certain coding standard errors in PHP CodeSniffer

    - by Tom
    We have a PHP 5 web application and we're currently evaluating PHP CodeSniffer in order to decide whether enforcing code standards improves code quality without causing too much of a headache. If it seems good, we will add an SVN pre-commit hook to ensure all new files committed on the dev branch are free from coding-standard smells. Is there a way to configure PHP CodeSniffer to ignore a particular type of error, or to get it to treat a certain error as a warning instead? Here is an example to demonstrate the issue:
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
        <div>
            <?php
            echo getTabContent('Programming', 1, $numX, $numY);
            if (isset($msg)) {
                echo $msg;
            }
            ?>
        </div>
        </body>
        </html>
    And this is the output of PHP_CodeSniffer:
        > phpcs test.php
        --------------------------------------------------------------------------------
        FOUND 2 ERROR(S) AND 1 WARNING(S) AFFECTING 3 LINE(S)
        --------------------------------------------------------------------------------
          1 | WARNING | Line exceeds 85 characters; contains 121 characters
          9 | ERROR   | Missing file doc comment
         11 | ERROR   | Line indented incorrectly; expected 0 spaces, found 4
        --------------------------------------------------------------------------------
    I have an issue with the "Line indented incorrectly" error. I guess it happens because I am mixing the PHP indentation with the HTML indentation, but this makes it more readable, doesn't it? (Taking into account that I don't have the resources to move to an MVC framework right now.) So I'd like to ignore it, please.
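
    One way to approach this (a sketch of the standard PHP_CodeSniffer mechanism, not something taken from the post): run phpcs -s once to see the exact sniff code behind each message, then point phpcs at a custom ruleset.xml that excludes or re-tunes those sniffs. The sniff names below are my assumptions about which sniffs produce these particular messages, so verify them against the -s output:

        <?xml version="1.0"?>
        <ruleset name="MyProjectStandard">
            <!-- Start from the stock PEAR standard... -->
            <rule ref="PEAR">
                <!-- ...but drop the scope-indentation sniff (confirm the name with phpcs -s). -->
                <exclude name="PEAR.WhiteSpace.ScopeIndent"/>
            </rule>
            <!-- Raise the threshold for the line-length warning. -->
            <rule ref="Generic.Files.LineLength">
                <properties>
                    <property name="lineLimit" value="120"/>
                </properties>
            </rule>
        </ruleset>

    It would then be invoked as phpcs --standard=/path/to/ruleset.xml test.php, which also works from an SVN pre-commit hook.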

    Read the article

  • Standard ratio of cookies to "visitors"?

    - by Jeff Atwood
    As noted in a recent blog post, We see a large discrepancy between Google Analytics "visitors" and Quantcast "visitors". Also, for reasons we have never figured out, Google Analytics just gets larger numbers than Quantcast. Right now GA is showing more visitors (15 million) on stackoverflow.com alone than Quantcast sees on the whole network (14 million): Why? I don’t know. Either Google Analytics loses cookies sometimes, or Quantcast misses visitors. Counting is an inexact science. We think this is because Quantcast uses a more conservative ratio of cookies-to-visitors. Whereas Google Analytics might consider every cookie a "visitor", Quantcast will only consider every 1.24 cookies a "visitor". This makes sense to me, as people may access our sites from multiple computers, multiple browsers, etcetera. I have two closely related questions: Is there an accepted standard ratio of cookies to visitors? This is obviously an inexact science, but is there any emerging rule of thumb? Is there any more accurate way to count "visitors" to a website other than relying on browser cookies? Or is this just always going to be kind of a best-effort estimation crapshoot no matter how you measure it?
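
    As a purely illustrative calculation (mine, not from the post): if every visitor produces about 1.24 cookies on average, then 15 million cookie-based "visitors" would correspond to roughly 15,000,000 / 1.24 ≈ 12.1 million people - the kind of downward adjustment a more conservative cookies-to-visitors ratio implies.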

    Read the article

  • Standard Network Tiers in a Distributed N-Tier System

    Distributed N-Tier client/server architecture allows segments of an application to be broken up and distributed across multiple locations on a network. Listed below are the standard tiers in a Distributed N-Tier System.
    End-User Client Tier - The End-User Client is responsible for sending requests to web servers and other application servers and translating the responses so that the end-user can interpret the data effectively. The primary roles for this tier are to communicate with servers and to translate server responses back for the end-user to interpret. Business-specific functions: Validate Data, Display Data, Send Data to Web Server.
    Web Server Tier - The web server tier processes new requests for information coming in on the HTTP and HTTPS ports. It primarily handles the generation of user interfaces and calls the application server when it needs data or business logic. Business-specific functions: Send Data to Application Server, Format Data for Display, Validate Data.
    Application Server Tier - The application server stores and executes predefined business logic that is applied to various pieces of data as the business requires. The processed data is then returned to the web server. Additionally, this server calls the database directly to obtain and store any data used by the system. Business-specific functions: Validate Data, Process Data, Send Data to Database Server.
    Database Server Tier - The database server is responsible for storing and returning all data needed by the calling applications. The primary role of this server is storage; data is stored as needed and can be recalled at any point later in time. Business-specific functions: Insert Data, Delete Data, Return Data to Application Server.
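
    To make the application-tier / database-tier boundary concrete, here is a small hypothetical sketch in Java (the Order, OrderDao and OrderService names are illustrative, not from the article): the application server tier validates and processes the data, while the data-access interface stands in for the database server tier and only stores or returns it.

        /** A trivial domain object used by both tiers in this sketch. */
        class Order {
            final String id;
            final double amount;
            Order(String id, double amount) { this.id = id; this.amount = amount; }
        }

        /** Database Server Tier boundary: storage only, no business rules. */
        interface OrderDao {
            void insert(Order order);      // Insert Data
            Order findById(String id);     // Return Data to Application Server
        }

        /** Application Server Tier: validate and process, then delegate persistence. */
        class OrderService {
            private final OrderDao dao;
            OrderService(OrderDao dao) { this.dao = dao; }

            void placeOrder(Order order) {
                if (order.amount <= 0) {   // Validate Data
                    throw new IllegalArgumentException("amount must be positive");
                }
                dao.insert(order);         // Send Data to Database Server
            }
        }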

    Read the article

  • Web Service Standard Complexity

    Are we over-standardizing web services and hindering their adoption? No; in fact I feel that standardization is helping adoption in the modern corporate world. Standards, although they can be daunting and tedious, provide a universal framework within which we can all operate. These frameworks provide a common interface for all of us to use when interacting with various computing environments, so that data can be transferred freely. Standards are the protocols by which computers communicate with one another. To take this to the living world: the United Nations hires interpreters for each country's dignitaries so that they can understand what other countries are talking about. Imagine if the president of the United States wanted to talk to the ruler of China. How would these two communicate? The interpreter would translate back and forth, acting as an intermediary using both standard American English and Chinese. Without knowing the standards of either language, no one would be able to communicate. Working within the framework of standards does not mean that we are stuck with those standards. As technology evolves, standards fall out of touch, and when this occurs they need to be refactored or replaced with new standards that are current with the technology of the time. How else are we as developers, and the technology itself, going to grow? What do you guys think?

    Read the article

  • Installing Forms and Reports on a development system

    - by Duncan Mills
    By popular demand I've resurrected / updated one of the old blog postings from Jan Carlin's blog on GroundSide here. A recent (lengthy) post on the Forms forums chronicles the problems some of you have had installing F&R on a development machine; see the link in the headline of this post for the main one. When installing, here are some points to bear in mind:
    - Download and install WebLogic Server first: http://www.oracle.com/technology/software/products/middleware/index.html. Find the Forms and Reports (and Disco and Portal) zip files there and download them to the desktop (or some other temporary directory of your choosing).
    - Unzip both zip files into the same new directory (maybe called 'stage') and check that you have four directories in the stage dir when you are finished unzipping: 'Disk1', 'Disk2', 'Disk3' and 'Disk4'. These folders are specified in the zip file structure and must be preserved for the setup executable to work.
    - If you use WinZip and have a right-click menu option that says "Extract to here", use that by right-click-dragging the zip file onto the newly created directory. Don't use the "Extract to folder %HOME%\Desktop\ofm_pfrd...disk_1of2" option; that will get you into the trouble that was reported early in this thread.
    - Free up as much memory as you can. Stop services, background processes, virus scanners, databases (you don't need a DB to install Forms) and other things lurking about on your machine; you can restart them when the install is done. Around 1.5 GB of free real memory should do it; if it doesn't, free up more if you can. Don't change the swap space unless you know what you are doing - let Windows handle it. A 1 GB machine will likely not be enough; you will likely need at least 2 GB of RAM.
    - Start the install with setup.exe from the 'Disk1' directory.
    - Choose the Install and Configure option unless you have a good reason not to.
    - Choose a unique instance name even if you deinstalled and removed the last install. I suggest using 'asinst_20090722_1' (today's date in ISO format, with a running incremented number at the end if you install more than once on a particular day).
    - Unselect Portal and Discoverer and select the Builders you want.
    - Unselect WebCache.
    - Unselect OHS.
    - Unselect the single sign-on option.
    - Check for any failures and choose the retry option if any occur. If that doesn't fix the problem, call Oracle Customer Support.

    Read the article

  • Izenda Reports 6.3 Top 10 Features

    - by gt0084e1
    Izenda 6.3 Top 10 New Features and Capabilities
    1. Izenda Maps Add-On - The Izenda Maps add-on allows rapid visualization of geographic or geo-spatial data. It is fully integrated with the rest of the Izenda report package and adds a Maps tab which allows users to add interactive maps to their reports. Contact your representative or [email protected] for limited-time discounts. Izenda Maps even has rich drill-down capabilities that allow you to dive deeper with a simple hover (also requires dashboards).
    2. Streamlined Pie Charts with "Other" Slices - The advanced properties of the pie chart now allow you to combine the smaller slices into a single "Other" slice. This reduces the visual complexity without throwing off the scale of the chart. Compare the difference below.
    3. Combined Bar + Line Charts - The bar chart now allows dual visualization of multiple metrics simultaneously by adding a line for secondary data. Enabled via AdHocSettings.AllowLineOnBar = true;
    4. Stacked Bar Charts - The stacked bar chart lets you see a breakdown of a measure based on categorical data. It is enabled with the following code: AdHocSettings.AllowStackedBarChart = true;
    5. Self-Joining Data Sources - The self-join feature allows parent-child relationships to be accessed from the Data Sources tab. The same table can be used as a secondary child table within the Report Designer.
    6. Report Design From Dashboard View - Dashboards now sport both view and design icons to allow quick access to both.
    7. Field Arithmetic on Dates - Differences between dates can now be used as measures with the arithmetic feature.
    8. Simplified Multi-Tenancy - Integrating with multi-tenant systems is now easier than ever. The following APIs have been added to facilitate common scenarios: AdHocSettings.CurrentUserTenantId = value; AdHocSettings.SchedulerTenantID = value; AdHocSettings.SchedulerTenantField = "AccountID";
    9. Support For SQL 2008 R2 and SQL Azure - Izenda now supports the latest version of Microsoft's database as well as the SQL Azure service.
    10. Enhanced Performance and Compatibility for Stored Procedures - Izenda now supports more stored procedures than ever and runs them faster too.

    Read the article

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
    Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx
    See also: http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx
    In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message "[Partition] has exceeded the max bytes". Below, I wanted to provide some additional info on this particular issue and help identify some options if this occurs. As an aside, this post only applies if you are missing portions of Usage data - think blind spots on intermittent days, or user activity regularly sparse for the afternoon/evening. If this fits your scenario, read on. If Usage logs are outright missing, go check out my Troubleshooting post first.
    Background on the problem:
    The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed-size logical partitions, and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDB would be split into 7 corresponding partitions at twice the size. In other words, the partition size is generally defined as [max size for DB] / [number of retention days].
    Going back to the default scenario, the "max size" for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition.
    From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away, because SharePoint won't allow any more to be written to that day's partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following in the ULS:
    04/08/2012 09:30:04.78 OWSTIMER.EXE (0x1E24) 0x2C98 SharePoint Foundation Health i0m6 High Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368
    It's also worth noting that the exact bytes reported (e.g. '444858368' above) may vary slightly among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter; the message indicates that the reported usage has exceeded the partition size for the given day.
    What it means:
    The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise periodically checking the ULS logs for this message. Down the road, I plan to explore whether [Developing a Custom Health Rule] could be leveraged to identify the issue (if you've ever built custom Health Rules, I'd be interested to hear about your experiences).
    Overcoming this issue also poses a challenge, with workaround options including:
    Lower the retention
    Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data - the lower the retention, the lower the divisor and thus the bigger each partition. For example, halving the retention from 14 to 7 days would halve the number of partitions but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions, and so on.
    Recreate the LoggingDB with an increased size
    The property MaxTotalSizeInBytes is exposed by OM code for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact, because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data, because the Usage Service Application can only have a single LoggingDB.
    Here is an example snippet to update the "Page Requests" Usage Definition:
        $def = Get-SPUsageDefinition -Identity "page requests"
        $def.MaxTotalSizeInBytes = 12400000000
        $def.Update()
    Create a new logging database and attach it to the Usage Service Application using the following command:
        Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>
    Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken by running the following SQL query (be sure to replace the database name in the query with the name provided in the PowerShell above):
        SELECT * FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (nolock)
        WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'

    Read the article

  • Why isn't int pow(int base, int exponent) in the standard C++ libraries?

    - by Dan O
    I feel like I must just be unable to find it. Is there any reason that the C++ pow function does not implement the "power" function for anything except floats and doubles? I know the implementation is trivial; I just feel like I'm doing work that should be in a standard library. A robust power function (i.e. one that handles overflow in some consistent, explicit way) is not fun to write.
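
    The question is about C++, but as an illustration of why a "robust" integer power routine takes some care, here is a minimal sketch of exponentiation by squaring written in Java (my example, not from the post), using Math.multiplyExact so that overflow raises an explicit ArithmeticException instead of silently wrapping:

        public class IntPow {
            // Exponentiation by squaring; overflow surfaces as ArithmeticException.
            static long ipow(long base, int exponent) {
                if (exponent < 0) {
                    throw new IllegalArgumentException("negative exponents not supported");
                }
                long result = 1;
                long b = base;
                int e = exponent;
                while (e > 0) {
                    if ((e & 1) == 1) {
                        result = Math.multiplyExact(result, b);
                    }
                    e >>= 1;
                    if (e > 0) {
                        b = Math.multiplyExact(b, b);   // only square while more bits remain
                    }
                }
                return result;
            }

            public static void main(String[] args) {
                System.out.println(ipow(3, 13));   // prints 1594323
            }
        }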

    Read the article
