Search Results

Search found 657 results on 27 pages for 'metrics'.

Page 8/27 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Documentation in Oracle Retail Analytics, Release 13.3

    - by Oracle Retail Documentation Team
    The 13.3 Release of Oracle Retail Analytics is now available on the Oracle Software Delivery Cloud and from My Oracle Support. The Oracle Retail Analytics 13.3 release introduces significant new functionality with its new Customer Analytics module, which enables you to perform retail analysis of customers and customer segments. Market basket analysis (part of the Customer Analytics module) provides insight into which products have strong affinity with one another. Customer behavior information is obtained by mining sales transaction history and is correlated with customer segment attributes to inform promotion strategies. The ability to understand market basket affinities allows marketers to calculate, monitor, and build promotion strategies based on critical metrics such as customer profitability.

    Highlighted End User Documentation Updates
    With the addition of Oracle Retail Customer Analytics, the documentation set addresses both modules under the single umbrella name of Oracle Retail Analytics. Note, however, that the modules, Oracle Retail Merchandising Analytics and Oracle Retail Customer Analytics, are licensed separately. To accommodate the new functionality, the Retail Analytics suite of documentation has been updated in the following areas, among others: the User Guide has been updated with an overview of Customer Analytics and now contains a list of metrics associated with Customer Analytics; the Operations Guide provides details on Market Basket Analysis as well as an updated list of APIs, and its program reference list now also identifies the module (Merchandising Analytics or Customer Analytics) to which each program applies; and the Data Model was updated to include new information related to Customer Analytics, with a new section, Market Basket Analysis Module, added with its own entity relationship diagrams and data definitions.

    List of Documents
    The following documents are included in Oracle Retail Analytics 13.3:
      - Oracle Retail Analytics Release Notes
      - Oracle Retail Analytics Installation Guide
      - Oracle Retail Analytics User Guide
      - Oracle Retail Analytics Implementation Guide
      - Oracle Retail Analytics Operations Guide
      - Oracle Retail Analytics Data Model

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to help the EM Administrator monitor the health of the EM infrastructure. Taking feedback delivered from customers directly and through customer advisory boards, some nice enhancements have been made to the “Manage Cloud Control” sections of the UI, commonly known in the EM community as “the MTM pages” (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as mission control for EM Administrators. In this post we’ll highlight some of the new information on display in these redesigned pages and explain how it can help EM Administrators identify potential bottlenecks or issues with the EM infrastructure.

    The first page we’ll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you’ll see the new layout, which includes three tabs containing more drill-down information.

    The Repository Tab
    The first tab, Repository, gives you a series of six panels or regions on screen that display key information the EM Administrator needs to review from time to time to ensure their infrastructure is in good health. Rather than go through every panel, let’s call out a few and let you explore the others later on your own EM site. First, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important pieces of information relating to availability and reliability:
      - Is the database in Archive Log mode?
      - Is the database using Flashback?
      - When was the last database backup taken?
    In the test environment above the answers are not too worrying; however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the control file indicates there have been no recorded backups.

    The next region of interest on this page shows key information about the Repository configuration, specifically the initialisation parameters (from the spfile). If you’re storing your EM Repository in a Cluster Database you can view the parameters of each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you’ll note there is now a check performed on the active configuration to ensure that you’re using, at the very least, Oracle's minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally the run-time values, if the parameter allows dynamic changes).

    The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but some important new functionality has been added that customers have requested. First up: restarting Repository jobs. As you can see from the graphic, you can now select a job (by selecting its row in the UI table) and click the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation; you can now take care of it directly from within the UI. Next, you’ll see that a feature has been added to allow the EM Administrator to customise the run time of some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don’t clash with Production backups and the like is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule for these more resource-intensive background jobs and modify it to avoid such clashes. Moving on to the next tab, let’s select the Metrics tab.

    The Metrics Tab
    There are some big changes here: this page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have told us they are in the dark about the impact on the Repository of adding new targets, large numbers of new hosts, or new target types into EM. This page helps the EM Administrator get to grips with that. Let’s take a quick look at two regions on this page.

    First up there’s a bubble chart showing a comprehensive view of the top resource consumers of metric data over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates relative volume. You can see from this example that a quick glance shows Host metrics are the largest inbound flow into the repository when measured by number of rows. Close behind, though, are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7MB of data, while the total collected for WebLogic Server and Application Deployments is 0.38MB and 0.37MB, respectively. To get this breakdown of the volume of data collected, simply hover over a bubble in the chart and a floating tooltip shows the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost, in terms of number of rows, number of collections, and storage cost of data, for each metric type.

    Another panel on this page gives a different view of this data: the Top N metrics (the drop-down allows you to select 10, 15, or 20), sorted by volume of data. In the example above, the largest metric collection by volume over the last 30 days is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It’s a great tool for identifying the cause of a sudden increase in Repository storage consumption or redo log and archive log generation. Using the information on this page, EM Administrators can take action to mitigate any load impact, for example by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we’ll look at on this page is the Schema tab.

    The Schema Tab
    Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for its users, not least because managing storage well ensures the continued availability of EM for monitoring.

    The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend showing that storage in the EM Repository has been consumed steadily over the last few days. This is normal, as the EM installation used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you’re likely to see space allocation plateau. The table below the trend chart shows the Top 20 Tables/Indexes sorted in descending order of space consumed. You can switch the trend chart and corresponding detail table to a different tablespace in the EM Repository using the drop-down picker in the top right of this region.

    The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information illustrates to EM Administrators the default purge policies for the different categories of information held in the EM Repository. It has also long been a requested feature to be able to modify these default retention periods, and you can do that from this screen too. Because there are interdependencies between some data elements, you can’t modify retention policies on a feature-by-feature basis; instead, categories of information are bundled together into groups, and retention policies are modified at the group level. Understanding the impact of this really deserves a blog post of its own, as modifying these settings can have a significant impact on both the EM Repository’s storage footprint and its performance. For now, we’re just highlighting the feature’s visibility on these new pages.

    We hope the new features you see here address some of the feedback that’s been given on these pages over the past few releases. We’ll look out for any comments or feedback you have on these pages!

    Read the article

  • Unable to Get IIS Response Times with Hyperic Monitoring Tool

    - by jwmajors81
    We have a very large .NET application (from a vendor) that I am trying to gather performance metrics for using Hyperic. In general I wanted to be able to report response times for each of the components within the application, which include web services, ASP.NET pages, and MSMQ. I am currently unable to successfully monitor the response times for IIS on my Windows machines. I have successfully auto-discovered them and I am getting diagnostic information, but I am not getting back the response times. After looking online at what other people are seeing, I found that I am missing a tab for the response times which is usually next to the Metrics tab. Also, when I look at the configuration screen for IIS I do not see a field which lets me specify where the log files are located. Please note that my logs are located at e:\LogFiles\ instead of the usual location because our IT staff doesn't allocate much space for the C: drive. Also note that I have the MSMQ monitors up and running great. Any assistance would be greatly appreciated. Thanks, Jeremy

    Read the article

  • How to handle growing QA reporting requirements?

    - by Phillip Jackson
    Some Background: Our company is growing very quickly - in 3 years we've tripled in size and there are no signs of stopping any time soon. Our marketing department has expanded and our IT requirements have as well. When I first arrived everything was managed in Dreamweaver and Excel spreadsheets, and we've worked hard to implement bug tracking, version control, continuous integration, and multi-stage deployment. It's been a long hard road, and now we need to get more organized.

    The Problem at Hand: Management would like to track, per developer, who is generating the most issues at the QA stage (post unit testing, regression, and post-production issues specifically). This becomes a fine balance because many issues can't be reported granularly (e.g. per URL or per "page"), yet that's how Management would like reporting to be broken down. Further, severity has to be taken into account. We have drafted standards for each of these areas specific to our environment. Developers don't want to be nicked for 100+ instances of an issue if it was a problem with an include or inheritance... I had a suggestion to "score" bugs based on severity... but nobody likes that. We can't enter issues for every individual module affected by a global issue.

    [UPDATED] The Actual Questions: How do medium-sized businesses and code shops handle bug tracking, reporting, and providing useful metrics to management? What kinds of KPIs are better metrics for employee performance? What is the most common way to provide per-developer reporting as far as time-to-close, reopens, etc.? Do large enterprises ignore the efforts of individuals and rather focus on the team? Some other questions: Is this too granular a level of reporting? Is this considered 'blame culture'? If you were a developer working in this environment, what would you define as a measurable goal for this year to track your progress, with the reward for achieving the goal being a bonus?

    Read the article

  • What are the benefits/drawbacks of classifying defects during a peer code review?

    - by DXM
    About 3 months ago, our engineering group rolled out Review Board to be used for all peer code reviews. Today, I had a discussion with one of the people involved in that process and found out that we are already looking for a replacement (possibly something commercial) because of several missing features. One of the features apparently asked for by many people is the ability to classify/categorize each code review comment (i.e. is it a style issue, coding convention, resource leak, logic error, crash... whatever). For those teams that regularly practice code review, is this categorization a common practice? Do you do it? Have you done it in the past? Is it good/bad? On one hand, it gives the team some more metrics and possibly indicates more specific areas in which developers may need to be trained (at least that seems to be the argument). Are there other benefits? On the other hand, and this is my concern, it will slow down the code review process that much more. As a team lead, I've done a fairly large share of reviews, and I've always liked the ability to highlight a chunk of code, hammer off a comment, and move on as fast as possible. Although I haven't tried it personally, I have a feeling that expanding that combo box every time and scrolling/searching for the right category would feel like something tripping you up. Also, if we start keeping metrics on this stuff, my other concern is that valuable code review meeting time will be spent arguing whether something is a logic error or should be classified as a crash.

    Read the article

  • I need a server or service that reroutes DNS requests

    - by Relentim
    We have two external servers, Dev and Prod. We are a software house, and in the code we have a subdomain metrics.company.com that points to Prod. Development is continuous, and our internal and external developers and testers will need to switch from Dev to Prod and back again. It is not an option to have a different subdomain in the code during development and change this for production. The way we wish to switch between Dev and Prod is to use DNS. We need a public DNS server that behaves normally apart from routing metrics.company.com to Dev. The users will be able to swap their DNS back and forth to hit the different servers. What is the easiest way to do this? Is there a company that hosts this service, or am I going to have to rent a server and set it up myself? Any help would be much appreciated.
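
    One lightweight way to do this, assuming you can run a small Linux box or VM as the resolver, is dnsmasq: it answers one name with a fixed address and forwards everything else upstream. The IP addresses below are placeholders, so treat this as a sketch rather than a finished configuration:

        # /etc/dnsmasq.conf
        # Answer metrics.company.com with the Dev server's address...
        address=/metrics.company.com/192.0.2.10
        # ...and forward every other query to a normal public resolver.
        server=8.8.8.8

    Users who point their DNS at this box hit Dev; switching back to their normal resolver sends them to Prod. Swapping the address= line (or running a second instance) covers the reverse case.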

    Read the article

  • Have you used nDepend?

    - by Nick Harrison
    Have you used NDepend? I have often wanted to use it, but never spent the money on it. I have developed many tools that try to do pieces of what NDepend does, but never with as much success as it achieves. Put simply, it is a tool that will allow you to understand and monitor the architecture of your software, and it does it in some pretty amazing ways. One of the most impressive features is something they call Code Query Language. It allows you to write queries, very similar to SQL, to track various software metrics and use them to identify areas that are out of compliance with your standards and architecture. For instance, once you have analyzed your project, you can write queries such as: SELECT METHODS WHERE IsPublic AND CouldBePrivate. You can also set up such queries to provide warnings if there are records returned. You can incorporate this into your daily build and compare build against build. There are over 82 metrics included to allow you to view your code from a variety of angles. I have often advocated for a "Code Inventory" database to track the state of software and the ROI on software investments. This tool alone will take you about 90% of the way there. If you are not using it yet, I strongly recommend that you do!
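
    For example, a constraint that warns when any method grows too large might look like the following. This is a sketch from memory of the older CQL syntax, so treat the exact metric name as an assumption:

        WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 200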

    Read the article

  • Interpreting Munin graphs showing available entropy and MySQL slow queries in sync

    - by user64204
    We're experiencing performance issues on our website, and after reviewing our Munin graphs, the only metrics we've found in sync are Available entropy and MySQL slow queries, with the latter influenced by our number of logged-in users. Based on the Wikipedia entropy page, my understanding is that entropy is the amount of randomness (here measured in bytes) that the system can use for various tasks, mainly cryptography and functions that require random input. Since the peaks in available entropy and MySQL slow queries occur in sync and at regular intervals, since the number of MySQL slow queries is proportional to our number of Drupal users, and since the peaks in available entropy seem much more constant and less proportional to these two metrics, we're thinking available entropy reflects a root cause which, combined with the traffic to our website, is causing those slow queries (and not the opposite, slow queries influencing the entropy). Accordingly: Q: What underlying problem do you think could cause regular peaks in available entropy that could have an influence on MySQL's ability to process queries?
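
    For what it's worth, a quick way to watch the kernel's entropy pool directly on the database host (the value is the kernel's estimate of available entropy, in bits) and see whether dips line up with the slow-query peaks:

        cat /proc/sys/kernel/random/entropy_avail
        # or poll it alongside the slow query log
        watch -n 10 cat /proc/sys/kernel/random/entropy_avail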

    Read the article

  • Tools to monitor guest OS performance in vSphere

    - by Quick Joe Smith
    I am looking for some tool or way to retrieve performance data from guest VMs running under vSphere 4.1. I am currently interested in the 4 basic metrics: CPU (%), memory (%), disk availability (%), and network utilisation (Kb/s). The issue I have is that all of vSphere's performance data is from an ESXi host perspective (active, shared, consumed, overhead, swapped, etc.), which is far removed from the data from the VM's own perspective. For instance, I have a Windows server VM idling, using around 410MB (~25% of its allocated 2GB) as reported by Task Manager, and this is the value I'm after. vSphere's metrics seem unable to arrive at this figure by any reliable and repeatable means. Is anyone aware of tools that can obtain this kind of data? The simpler, the better.

    Read the article

  • Must developers understand the business domain or should the specification be sufficient?

    - by Jerome C.
    I work for a company whose domain is really difficult to understand because it is high technology in electronics, but this is applicable to any software development in a complex domain. The application that I work on displays a lot of information, charts, and metrics which are difficult to understand without experience in the domain. The developer uses a specification to describe what the software must do, for example specifying that a particular chart must display this kind of metric and that the metric is given by the following arithmetic formula. This way, the developer doesn't really understand the business or what he is doing and why. This can be OK if the specification is really detailed, but when it isn't, or when the author has forgotten a use case, it is quite hard for the developer to find a solution. On the other hand, training every developer in all the business aspects can be very long and difficult. Should we give more importance to detailed specifications (though, as we know, the perfect specification does not exist), or should we train all the developers to understand the business domain? EDIT: keep in mind in your answer that the company could use external developers.

    Read the article

  • Ganglia: divide colors by roles

    - by com
    Sorry for a silly question; I am still a newbie to Ganglia. In Ganglia I monitor a few important metrics for MySQL (seconds behind master, etc.). In addition I have a few bunches of MySQL servers (every bunch has its own tasks, but all of the bunches should be checked for seconds behind master). I would like to know whether it is possible to show all metrics on one page, with different colors for different bunches. Right now, for the "seconds behind master" metric I see all MySQL servers colored by state (red is critical, gray is OK). Can I set a color on a graph according to its bunch? Thanks!

    Read the article

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMW (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials. DMS is essentially always running and collecting this information at runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found at DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short, and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours, and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime. It will only eliminate these periodic backups.
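
    For example, leaving the collection interval and size untouched but switching the periodic dump off would look like this (a sketch of the edited dms_config.xml entry):

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="false"/>
        </dumpConfiguration>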

    Read the article

  • Monitoring file I/O on a file server

    - by jenglee
    Is there a way to monitor the file I/O on my file server? I want to gather some metrics on my current file system. I am running an old Windows 2003 file server and I am planning on moving to a new file server running either Windows Server 2008 or 2012. I want to use these metrics to get a new file server that improves file I/O and access. Can someone please advise me on the best way to monitor file access and gather file I/O information so I can upgrade to a better file server?
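
    One low-effort option on Windows Server 2003 is the built-in typeperf utility, which can sample the standard disk counters to a CSV for later analysis. The counter list, interval, and sample count below are only illustrative:

        rem Sample the main disk I/O counters every 15 seconds, 240 samples (about an hour)
        typeperf "\LogicalDisk(_Total)\Disk Reads/sec" "\LogicalDisk(_Total)\Disk Writes/sec" ^
                 "\LogicalDisk(_Total)\Avg. Disk sec/Read" "\LogicalDisk(_Total)\Avg. Disk sec/Write" ^
                 -si 15 -sc 240 -o fileio.csv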

    Read the article

  • What to look for in estimating a PowerBuilder Conversion Project?

    - by tekiegreg
    Hi there, I've been trying to do a spec for a PowerBuilder 9 to 11.5 migration of a relatively complex application. Granted, PowerBuilder is not really my specialty, so I'm having issues trying to justify an estimate for this part of the project (and the PowerBuilder people I've been talking with have had some personal issues lately and are out of communication). These are some of the metrics that we have seen and can evaluate: PBL files, main windows, DataWindows, and functions (no, we don't have the source available on this project). What metrics in particular are helpful, and how long would any given "unit" such as a DataWindow take?

    Read the article

  • Extracting data from multiple servers SQL 2005 SSIS

    - by Raj
    I have created an SSIS package to connect to multiple SQL servers and create a database, a table, and a stored procedure on each. The package also creates a job and schedules it to run every 5 minutes. The requirement is to collect performance metrics. I am using an ADO object variable to get the server names, all of the above tasks are in a Foreach Loop, and everything works fine. Now the problem: I need to create a Data Flow task which will connect to each of these servers in turn, copy the performance metrics data over to a central server, and purge the source table. I am unable to get this task to work; it fails with an "Unable to obtain Connection" error. Any help will be greatly appreciated. SQL Server version: 2005. Thanks, Raj

    Read the article

  • Keystroke time in Openmoko or any smart phone

    - by Adi
    Dear all, I am doing a project in which I am working on security issues related to smart phones. I want to develop an authentication scheme based on biometrics: every human being has a unique key-hold time, digraph time, and error rate. Key-hold time: the time difference between pressing and releasing a key. Digraph time: the time difference between releasing one key and pressing the next one. Error rate: the number of times backspace is pressed. I got these metrics from the paper "Keystroke-based User Identification on Smart Phones" by Saira Zahid, Muhammad Shahzad, Syed Ali Khayam, and Muddassar Farooq. I was planning to get the datasets to test my algorithm from an Openmoko phone, but the phone is misbehaving and I am having trouble generating these timing datasets. If anyone can help me or point me to a good source of datasets for the 3 metrics I defined, it will be a great help. Thanks, Aditya
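
    As an illustration of the three metrics (not tied to any particular phone), here is a minimal Python sketch that derives them from a hypothetical log of (key, press time, release time) tuples in milliseconds:

        # Hypothetical key event log: (key, press_ms, release_ms)
        events = [("h", 0, 95), ("e", 180, 260), ("<bs>", 410, 505), ("i", 640, 730)]

        # Key-hold time: release minus press for the same key
        key_hold = [release - press for _, press, release in events]

        # Digraph time: next key's press minus previous key's release
        digraph = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

        # Error rate: fraction of keystrokes that are backspace
        error_rate = sum(1 for key, _, _ in events if key == "<bs>") / float(len(events))

        print(key_hold)    # [95, 80, 95, 90]
        print(digraph)     # [85, 150, 135]
        print(error_rate)  # 0.25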

    Read the article

  • The system cannot find the path specified with FileWriter

    - by Nazgulled
    Hi, I have this code:

        private static void saveMetricsToCSV(String fileName, double[] metrics) {
            try {
                FileWriter fWriter = new FileWriter(
                        System.getProperty("user.dir") + "\\output\\" +
                        fileTimestamp + "_" + fileDBSize + "-" + fileName + ".csv"
                );
                BufferedWriter csvFile = new BufferedWriter(fWriter);

                for(int i = 0; i < 4; i++) {
                    for(int j = 0; j < 5; j++) {
                        csvFile.write(String.format("%,10f;", metrics[i+j]));
                    }
                    csvFile.write(System.getProperty("line.separator"));
                }

                csvFile.close();
            } catch(IOException e) {
                System.out.println(e.getMessage());
            }
        }

    But I get this error:

        C:\Users\Nazgulled\Documents\Workspace\Só Amigos\output\1274715228419_5000-List-ImportDatabase.csv (The system cannot find the path specified)

    Any idea why? I'm using NetBeans on Windows 7 if it matters...
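
    The usual culprit for that message is that FileWriter does not create missing directories, so if <user.dir>\output does not exist, opening the file fails with exactly this error. A minimal sketch of creating the folder first (the file name here is shortened for illustration):

        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;

        public class CsvWriterSketch {
            public static void main(String[] args) throws IOException {
                // Build the output directory first; FileWriter only creates the file itself.
                File outDir = new File(System.getProperty("user.dir"), "output");
                if (!outDir.isDirectory() && !outDir.mkdirs()) {
                    throw new IOException("Could not create directory: " + outDir);
                }
                File csv = new File(outDir, "metrics.csv"); // hypothetical file name
                FileWriter writer = new FileWriter(csv);
                writer.write("value1;value2");
                writer.close();
            }
        }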

    Read the article

  • How to sort a Ruby Hash by number value?

    - by dustmoo
    Hi everyone, I have a counter hash that I am trying to sort by count. The problem I am running into is that the default Hash.sort function sorts numbers like strings rather than by number size. i.e. Given Hash: metrics = {"sitea.com" => 745, "siteb.com" => 9, "sitec.com" => 10 } Running this code: metrics.sort {|a1,a2| a2[1]<=>a1[1]} will return a sorted array: [ 'siteb.com', 9, 'sitea.com', 745, 'sitec.com', 10] Even though 745 is a larger number than 9, 9 will appear first in the list. When trying to show who has the top count, this is making my life difficult. :) Any ideas on how to sort a hash (or an array even) by number value size? I appreciate any help.
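
    For what it's worth, sorting by the numeric value (negated for descending order) avoids any string comparison entirely. A small sketch:

        metrics = { "sitea.com" => 745, "siteb.com" => 9, "sitec.com" => 10 }

        # Compare the counts as numbers; negate for descending order
        top_sites = metrics.sort_by { |_site, count| -count }
        # => [["sitea.com", 745], ["sitec.com", 10], ["siteb.com", 9]]
        puts top_sites.inspect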

    Read the article

  • Determining which classes would benefit most from unit testing?

    - by benoit
    I am working on a project where we have only 13% code coverage with our unit tests. I would like to come up with a plan to improve that, but by focusing first on the areas where increasing coverage would bring the greatest value. This project is in C#, we're using VS 2008 and TFS 2008, and our unit tests are written using MSTest. What methodology should I use to determine which classes we should tackle first? Which metrics (code or usage) should I be looking at (and how can I get those metrics if this is not obvious)?

    Read the article

  • How can I clip strings in Java2D and add "..." at the end?

    - by Jonas
    I'm trying to print invoices in a Java Swing application. I do that by implementing Printable and its method public int print(Graphics g, PageFormat pf, int page). I would like to draw strings in columns, and when a string is too long I want to clip it and let it end with "...". How can I measure the string and clip it at the right position? Some of my code:

        Font headline = new Font("Times New Roman", Font.BOLD, 14);
        g2d.setFont(headline);
        FontMetrics metrics = g2d.getFontMetrics(headline);
        g2d.drawString(myString, 0, 20);

    I.e., how can I limit myString to a maximum of 120px? I could use metrics.stringWidth(myString), but I don't get the position where I have to clip the string. Expected results could be:

        A longer string that exc...
        A shorter string.
        Another long string, but OK
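
    A common approach, sketched below under the assumption that the FontMetrics comes from the same Graphics2D you draw with, is to shorten the string until it plus the ellipsis fits inside the column width:

        import java.awt.FontMetrics;

        public class StringClipper {
            // Returns text unchanged if it fits within maxWidth pixels,
            // otherwise truncates it and appends "...".
            public static String clip(String text, FontMetrics metrics, int maxWidth) {
                if (metrics.stringWidth(text) <= maxWidth) {
                    return text;
                }
                String ellipsis = "...";
                int available = maxWidth - metrics.stringWidth(ellipsis);
                int end = text.length();
                while (end > 0 && metrics.stringWidth(text.substring(0, end)) > available) {
                    end--;
                }
                return text.substring(0, end) + ellipsis;
            }
        }

    Calling g2d.drawString(StringClipper.clip(myString, metrics, 120), 0, 20) then keeps the column to 120px.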

    Read the article

  • Calculating the pixel size of a string with Python

    - by Aristide
    I have a Python script which needs to calculate the exact size of arbitrary strings displayed in arbitrary fonts in order to generate simple diagrams. I can easily do it with Tkinter. The problem is the results seem to depend on the version of Python and/or the system.

        import Tkinter as tk
        import tkFont
        root = tk.Tk()
        times12 = tkFont.Font(family="times", size=12)
        print times12.metrics("linespace"),
        print times12.measure("Hello world")
        times24 = tkFont.Font(family="times", size=24)
        print times24.metrics("linespace"),
        print times24.measure("Hello world")

    Python 2.5 on Mac OS X gives the actual pixel measurements: 12 57 24 116
    Python 2.6.1 on Mac OS X gives: 14 58 27 115
    Python 2.6.3 on Windows XP gives: 19 71 36 154

    Such a need being quite common, I suspect I did something wrong. Any idea?
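
    One thing worth ruling out before blaming the Python version: Tk reports font metrics in pixels, and the pixels-per-point scaling differs between installations. Printing (and, if you like, pinning) Tk's scaling factor makes the numbers comparable across machines. A sketch, with the fixed value chosen arbitrarily:

        import Tkinter as tk

        root = tk.Tk()
        # Number of pixels per typographic point on this display
        print root.tk.call('tk', 'scaling')
        # Forcing a common value before creating fonts makes measurements comparable
        root.tk.call('tk', 'scaling', 1.0)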

    Read the article

  • UISegmentedControl is hidden under the titleBar

    - by shay te
    I guess I am missing something with the UISegmentedControl and Auto Layout. I have a tabbed application (UITabBarController), and I created a new UIViewController to act as a tab. To the new view I added a UISegmentedControl and placed it at the top using Auto Layout. I guess I don't completely understand something, because the UISegmentedControl is hiding under the title bar. Can you help me understand what I am missing? Thank you.

        import Foundation
        import UIKit;

        class ViewLikes: UIViewController {
            override func viewDidLoad() {
                super.viewDidLoad()
                title = "some title";

                var segmentControl: UISegmentedControl = UISegmentedControl(items: ["blash", "blah blah"]);
                segmentControl.selectedSegmentIndex = 1;
                segmentControl.setTranslatesAutoresizingMaskIntoConstraints(false)
                self.view.addSubview(segmentControl)

                //Set layout
                var viewsDict = Dictionary<String, UIView>()
                viewsDict["segment"] = segmentControl;

                //controls
                self.view.addConstraints(NSLayoutConstraint.constraintsWithVisualFormat("H:|-[segment]-|", options: NSLayoutFormatOptions.AlignAllCenterX, metrics: nil, views: viewsDict))
                self.view.addConstraints(NSLayoutConstraint.constraintsWithVisualFormat("V:|-[segment]", options: NSLayoutFormatOptions(0), metrics: nil, views: viewsDict))
            }

            override func didReceiveMemoryWarning() {
                super.didReceiveMemoryWarning()
                // Dispose of any resources that can be recreated.
            }
        }

    Read the article

  • Reporting / BI Framework for social websites

    - by Ryan
    I'm looking for ideas / open source frameworks to use for creating individual analytics for user profiles and all the other profile types. Users will have different custom metrics, businesses will have separate metrics, the admin section will have separate ones, advertisers will have separate ones, etc. So basically the goal is to have one framework in place for all analytics, customized user to user, and even use it for the system's analytics needs as well. It will include data analytics, as there will be user ratings/reviews to perform data mining on for businesses. Users will have basic reporting for their needs (like friend demographics, filtering by different preferences, etc.). The system is being developed in CakePHP. Thanks.

    Read the article

  • Samsung Galaxy Tab 1 layout problems

    - by William Rae
    I'm having trouble running my app on the original Galaxy Tab 7". It appears to make everything 1.5 times bigger; i.e., if I specify 40dip for a textSize in my layout, it will display as 60dip when I run it on the tablet. I tried messing around with the display metrics and changing the density and densityDpi to 1. (When I run a toString of the display metrics on the Galaxy Tab 2, they are both 1, whereas the Galaxy Tab 1 has values of 1.5.) The app runs very well on every phone I've tested it on, and on the Galaxy Tab 2, so I can't figure out what the problem is. I even tried creating a dummy app with just a TextView with a size of 40dip, and it still converted it to 60. Any ideas? Thanks.
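
    If it helps, the density value you saw is exactly the factor the framework multiplies dips by. A small sketch of the conversion, assumed to run inside an Activity with textView being any TextView from the layout:

        import android.util.DisplayMetrics;
        import android.util.TypedValue;

        // Inside an Activity or View:
        DisplayMetrics dm = getResources().getDisplayMetrics();
        // 40dip becomes 40 * dm.density physical pixels (60 when density is 1.5, 40 when it is 1.0)
        float px = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 40, dm);
        textView.setTextSize(TypedValue.COMPLEX_UNIT_PX, px);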

    Read the article

  • Get averages from pre-aggregated reports in mongodb

    - by Chris
    I've got a database with pre-aggregated metrics similar to the one outlined in this use case: http://docs.mongodb.org/manual/use-cases/pre-aggregated-reports/ I have a daily collection with a subdocument for hour and minute metrics, and a 'metadata.date' entry for midnight on the day it represents. I also have a monthly collection with a day subdocument for each day. If I want to get an average of a metric over the past eight or so days, how can I do that with the aggregation framework? Is the aggregation framework not the right tool for this since the data is already pre-aggregated?
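
    The aggregation framework can still average pre-aggregated counters; it just operates on whatever per-day totals you stored. A sketch using a reasonably recent pymongo, where the database name and the 'daily' counter field are assumptions based on the pre-aggregated reports schema:

        from datetime import datetime, timedelta
        from pymongo import MongoClient

        db = MongoClient().stats  # hypothetical database name

        # Average the stored per-day totals over the last 8 days
        since = datetime.utcnow() - timedelta(days=8)
        pipeline = [
            {"$match": {"metadata.date": {"$gte": since}}},
            {"$group": {"_id": None, "avg_daily": {"$avg": "$daily"}}},
        ]
        for doc in db.daily.aggregate(pipeline):
            print(doc)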

    Read the article
