Search Results

Search found 6001 results on 241 pages for 'requires'.


  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server) or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing. In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and, more recently, our 'Application Metrics' technology:

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   28                 19.0                8115                  8115
        MDB          114                77.6                1449                  1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.) So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data
    What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version, or just from the UI version? Each question could skew the data 10-fold either way, and the answers are known only by the developer who instrumented the application in the first place. In other words, only the original developer can interpret the data - product-managers cannot interpret the data unaided.

    Most of the data is from uninterested users
    About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further - asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature
    Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most-important and more-complex features that people actually buy the software for, leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!"
    People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's not worth collecting.

    People get obsessed with significance levels
    The first thing that lots of people do with this data is run a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error and misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity
    If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret or action any data
    Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   13                 9.0                 1115                  1115
        MDB          5                  4.2                 449                   449

    Trial users:

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   15                 10.0                7000                  7000
        MDB          114                77.6                1000                  1000

    How do you interpret this data? It's one of:
    - Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford, or unwilling to buy, our software. Therefore, ditch MDB support.
    - Our MDB support is so poor and buggy that our massive MDB user-base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
    - People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
    - We're marketing the tool wrong. The large number of MDB users represents uninformed downloaders. Tell marketing to aggressively target SQL Server users.
    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage
    Metrics tend to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well:
    - Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows?
    - Minor optimizations. Is the text box big enough for average user input?
    - Performance data. How long does our app take to start? How many databases does the average user have on their server?
    As you can see, questions about who the user is, rather than what the user does, are easier to answer and action.

    Conclusion
    Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation and error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part, I shared with you how a broad-based transformation makes cloud more than a technology initiative. In this section I will describe how it requires people (organizational) and process changes as well, and how these changes are as critical as the choice of the right tools and technology.

    People: Most IT organizations have a fairly complex organizational structure. There are different groups managing different pieces of the puzzle, and yet they don't always work together. Provisioning a new application may therefore require a request to float endlessly through the system administrator, DBA and middleware admin worlds – resulting in long delays and constant finger pointing. Cloud users expect end-to-end automation - which requires these silos to be greatly simplified, if not completely eliminated. Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages ago will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.

    Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes. While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators can define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources.

    Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduce cost. Additionally, it enables a much higher degree of automation - thereby providing users the required agility while minimizing operational costs. Obviously, automation is the key to cloud. Unfortunately it hasn’t received as much attention within enterprises as it should have. Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of cloud management software extremely critical. For cloud management software to be effective in an enterprise environment, it must meet the following qualifications:

    Broad and Deep Solution
    It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about. Its footprint must cover physical and virtual systems, as well as the infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher-level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise.

    Run and Manage Your Mission-Critical Applications Efficiently
    It should not only be able to run your mission-critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - and not the other way around.

    Automated, Integrated Set of Cloud Management Capabilities
    At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.

    At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private and hybrid clouds across infrastructure, platform and applications. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies. I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they map into our vision of the Enterprise Cloud. Stay tuned.

    Read the article

  • Setting up Rails to work with sqlserver

    - by FortunateDuke
    OK, I followed the steps for setting up Ruby and Rails on my Vista machine and I am having a problem connecting to the database. Contents of database.yml:

        development:
          adapter: sqlserver
          database: APPS_SETUP
          Host: WindowsVT06\SQLEXPRESS
          Username: se
          Password: paswd

    Running rake db:migrate from the myapp directory gives:

        rake aborted!
        no such file to load -- deprecated ADO

    I have dbi 0.4.0 installed and have created the ADO folder in C:\Ruby\lib\ruby\site_ruby\1.8\DBD\ADO. I got the ado.rb from dbi 0.2.2. What else should I be looking at to fix the issue connecting to the database? Please don't tell me to use MySql or Sqlite or Postgres.

    UPDATE: I have installed the activerecord-sqlserver-adapter gem from --source=http://gems.rubyonrails.org. Still not working. I have verified that I can connect to the database by logging into SQL Management Studio with the credentials. Output of rake db:migrate --trace:

        PS C:\Inetpub\wwwroot\myapp> rake db:migrate --trace
        (in C:/Inetpub/wwwroot/myapp)
        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate
        rake aborted!
        no such file to load -- deprecated
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:355:in `new_constants_in'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/site_ruby/1.8/dbi.rb:48
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:355:in `new_constants_in'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/core_ext/kernel/requires.rb:7:in `require_library_or_gem'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/core_ext/kernel/requires.rb:5:in `require_library_or_gem'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-sqlserver-adapter-1.0.0.9250/lib/active_record/connection_adapters/sqlserver_adapter.rb:29:in `sqlserver_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:292:in `send'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:292:in `connection='
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:260:in `retrieve_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:78:in `connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:408:in `initialize'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:373:in `new'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:373:in `up'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:356:in `migrate'
        C:/Ruby/lib/ruby/gems/1.8/gems/rails-2.1.1/lib/tasks/databases.rake:99
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:621:in `call'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:621:in `execute'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:616:in `each'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:616:in `execute'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:582:in `invoke_with_call_chain'
        C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:575:in `invoke_with_call_chain'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:568:in `invoke'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2031:in `invoke_task'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2009:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2009:in `each'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2009:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2048:in `standard_exception_handling'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2003:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:1982:in `run'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2048:in `standard_exception_handling'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:1979:in `run'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/bin/rake:31
        C:/Ruby/bin/rake:19:in `load'
        C:/Ruby/bin/rake:19
        PS C:\Inetpub\wwwroot\myapp>

    Read the article

  • Nginx compiled --with-http_spdy_module yet raises errors complaining about ngx_http_spdy_module

    - by c19
    [emerg] 21101#0: the "spdy" parameter requires ngx_http_spdy_module in /etc/nginx/conf.d/cc.conf

    Isn't it the same module? And it causes a multi-redirection error too. I have no idea what is going on. Full configure args:

        nginx version: nginx/1.4.2
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-pcre --with-http_ssl_module `--with-http_spdy_module` --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-openssl=/usr/local/src/openssl-1.0.1e
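    For reference, with an SPDY-capable binary the parameter the error refers to is normally enabled on the listen directive, roughly as in the sketch below (server name and certificate paths are placeholders). A frequently reported cause of this error even after compiling with --with-http_spdy_module is that the nginx binary actually being executed is an older build without the module, which running the installed binary with -V can confirm.

        server {
            listen 443 ssl spdy;                              # 'spdy' here is what triggers the [emerg] if the module is missing
            server_name example.com;                          # placeholder
            ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder paths
            ssl_certificate_key /etc/nginx/ssl/example.key;
        }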

    Read the article

  • How to log invalid client SSL certificate in SSL

    - by matra
    I have an IIS web site which requires a client certificate. I have turned off CRL checking. The client is unable to access the web site - he gets a 403.17 (certificate expired) error. I would like to log the certificate he is using, because I think he is using the wrong certificate. Is there a way to do this? I probably cannot use Wireshark, because the client certificate that is passed from the client is probably already encrypted. I am running a Windows 2003 server. Matra

    Read the article

  • Uninstall mysql completely windows 7

    - by cestmoimarin
    Greetings, I understand that this question has been asked twice. However, the answer is not there. I've removed all the MySQL registry entries via regedit that I could find. I made the ProgramData folder visible and deleted the MySQL folder that's there. Windows 7 doesn't have a very good 'grep' equivalent that I could use. I tried using PowerShell to find any hidden files, but it requires digital signatures which I do not know how to create. Besides Windows restore, is there any other way I can force my old 'invisible' MySQL 5.1.40 to disappear? I want to try and install MySQL via other ways; I'm not sure how to use CMake, though, to compile the code.
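    On the 'grep' gap mentioned above: a recursive PowerShell search is usually enough, and the 'digital signatures' hurdle is typically PowerShell's script execution policy, which does not apply to commands typed directly at the prompt. A rough sketch, assuming the leftovers live somewhere under C:\ :

        # Search the whole drive (including hidden/system items) for anything named *mysql*
        Get-ChildItem -Path C:\ -Recurse -Force -Filter "*mysql*" -ErrorAction SilentlyContinue |
            Select-Object -ExpandProperty FullName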

    Read the article

  • Problem with "Transfer-Encoding: chunked" in Apache 2.2

    - by Michal Niklas
    One of the clients of our web service uses an axis2 application that sends an HTTP 1.1 query with a Transfer-Encoding: chunked header. Such a query is refused by our Apache 2.2 with the message:

        <title>411 Length Required</title> </head><body> <h1>Length Required</h1> <p>A request of the requested method POST requires a valid Content-length.<br />

    In the Apache logs there is:

        [Mon May 17 09:06:04 2010] [error] [client 127.0.0.1] chunked Transfer-Encoding forbidden: /app/webservices/soap.hdb

    When I send such a message without Transfer-Encoding: chunked and with Content-Length, all works OK. I searched for how to solve this problem, but I found only how to disable Transfer-Encoding: chunked on the client side. Is there any way to do it on the server side?

    Read the article

  • OpenGL/SharpGL - Points only on -near surface of Ortho projection?

    - by FTLPhysicsGuy
    When you create points using three dimensions for each point and you use an Ortho projection to view the points, would there be a reason that only the points on the -near surface would appear? For example, if you use (the SharpGL method) gl.Ortho(0, width, height, 0, -10, 10), only the points at z=10 (because the near surface is at -10) actually show up. I'm currently using SharpGL - but I'm hoping the issue I'm having isn't with that particular implementation/library.

    EDIT: I'm adding the code below that demonstrates the issue. Note that this example requires SharpGL and is in fact a modification of a WPF sample project that comes with the current SharpGL source code (the original sample project is called TwoDSample). The project requires a MainWindow.xaml and a MainWindow.xaml.cs. Here's the xaml:

        <Window x:Class="TwoDSample.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="350" Width="525"
                xmlns:my="clr-namespace:SharpGL.WPF;assembly=SharpGL.WPF">
            <Grid>
                <my:OpenGLControl Name="openGLControl1"
                                  OpenGLDraw="openGLControl1_OpenGLDraw"
                                  OpenGLInitialized="openGLControl1_OpenGLInitialized"
                                  Resized="openGLControl1_Resized"/>
            </Grid>
        </Window>

    Here is the code behind:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Data;
        using System.Windows.Documents;
        using System.Windows.Input;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;
        using System.Windows.Navigation;
        using System.Windows.Shapes;
        using SharpGL.Enumerations;

        namespace TwoDSample
        {
            /// <summary>
            /// Interaction logic for MainWindow.xaml
            /// </summary>
            public partial class MainWindow : Window
            {
                public MainWindow()
                {
                    InitializeComponent();
                }

                // NOTE: I use this to restrict the openGLControl1_OpenGLDraw method to
                // drawing only once after m_drawCount is set to zero;
                int m_drawCount = 0;

                private void openGLControl1_OpenGLDraw(object sender, SharpGL.SceneGraph.OpenGLEventArgs args)
                {
                    // NOTE: Only draw once after m_drawCount is set to zero
                    if (m_drawCount < 1)
                    {
                        // Get the OpenGL instance.
                        var gl = args.OpenGL;

                        gl.Color(1f, 0f, 0f);
                        gl.PointSize(2.0f);

                        // Draw 10000 random points.
                        gl.Begin(BeginMode.Points);
                        Random random = new Random();
                        for (int i = 0; i < 10000; i++)
                        {
                            double x = 10 + 400 * random.NextDouble();
                            double y = 10 + 400 * random.NextDouble();
                            double z = (double)random.Next(-10, 0);

                            // Color the point according to z value
                            gl.Color(0f, 0f, 1f); // default to blue
                            if (z == -10)
                                gl.Color(1f, 0f, 0f); // Red for z = -10
                            else if (z == -1)
                                gl.Color(0f, 1f, 0f); // Green for z = -1

                            gl.Vertex(x, y, z);
                        }
                        gl.End();

                        m_drawCount++;
                    }
                }

                private void openGLControl1_OpenGLInitialized(object sender, SharpGL.SceneGraph.OpenGLEventArgs args)
                {
                }

                private void openGLControl1_Resized(object sender, SharpGL.SceneGraph.OpenGLEventArgs args)
                {
                    // NOTE: force the draw routine to happen again when resize occurs
                    m_drawCount = 0;

                    // Get the OpenGL instance.
                    var gl = args.OpenGL;

                    // Create an orthographic projection.
                    gl.MatrixMode(MatrixMode.Projection);
                    gl.LoadIdentity();

                    // NOTE: Basically no matter what I do, the only points I see are those at
                    // the "near" surface (with z = -zNear)--in this case, I only see green points
                    gl.Ortho(0, openGLControl1.ActualWidth, openGLControl1.ActualHeight, 0, 1, 10);

                    // Back to the modelview.
                    gl.MatrixMode(MatrixMode.Modelview);
                }
            }
        }

    Read the article

  • Best 'Remember the milk' client for Windows XP

    - by n0v1c3c0d3r
    I'm a user of RTM (Remember The Milk). Since I have Windows 7 at home, I'm using a Windows Sidebar gadget ('Forget the milk'). But as I'm using Win XP at the office, I cannot use the gadget. I am looking for an RTM client for Windows XP. I have used software running on Adobe AIR, which requires going to the RTM site every time to add a job. Are there any other effective clients for XP which can at least add a task and delete a task without visiting the site every time?

    Read the article

  • How to escape or remove double quotes in rsyslog template

    - by Evgeny
    I want rsyslog to write log messages in JSON format, which requires the use of double quotes (") around strings. The problem is that values sometimes include double quotes themselves, and those need to be escaped - but I can't figure out how to do that. Currently my rsyslog.conf contains this format that I use (a bit simplified):

        $template JsonFormat,"{\"msg\":\"%msg%\",\"app-name\":\"%app-name%\"}\n",sql

    But when a msg arrives that contains double quotes, the JSON is broken. For example:

        user pid=21214 uid=0 auid=4294967295 msg='PAM setcred: user="oracle" exe="/bin/su" (hostname=?, addr=?, terminal=? result=Success)'

    turns into:

        {"msg":"user pid=21214 uid=0 auid=4294967295 msg='PAM setcred: user="oracle" exe="/bin/su" (hostname=?, addr=?, terminal=? result=Success)'","app-name":"user"}

    but what I need it to become is:

        {"msg":"user pid=21214 uid=0 auid=4294967295 msg='PAM setcred: user=\"oracle\" exe=\"/bin/su\" (hostname=?, addr=?, terminal=? result=Success)'","app-name":"user"}
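    One approach worth checking, assuming the rsyslog release in use is recent enough to support the json property-replacer option (very old versions do not have it): the property replacer can JSON-escape a value when the option is appended after the property name, along the lines of this sketch of the template above.

        $template JsonFormat,"{\"msg\":\"%msg:::json%\",\"app-name\":\"%app-name:::json%\"}\n",sql

    The %property:fromChar:toChar:options% syntax leaves the from/to positions empty here and only sets the option, so embedded double quotes in the value come out escaped as \" - worth verifying against the property-replacer documentation for the installed version.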

    Read the article

  • EC2 hosted service multi-tenant dynamic DNS solution

    - by accidental admin
    I want to change the model of my EC2 hosted service to have a separate subdomain for each tenant (i.e. something like <tenant>.example.com). My primary DNS is now with dnsmadeeasy.com, but their dynamic DNS offering seems pretty weak:

    - it requires the API to use my full dnsmadeeasy.com account credentials; I'd rather have the API use a limited-privilege credential that can only add/remove/modify subdomain records
    - from what I gather, it only allows modifying existing records; it does not allow me to dynamically add/remove records for new tenant subdomains

    My question: what are my alternatives? Is there something in the dnsmadeeasy API offering I misunderstood, and should I just use them? Is there some other similar DNS service that has a DDNS offering that satisfies my requirements? Or should I just bite the bullet and host my own DNS (my fear is not configuration/learning/know-how, my fear is reliability)? If you recommend the latter, can you detail the necessary steps or point to a good tutorial on how to?

    Read the article

  • Google chrome asking for username and password for OWA

    - by Grant
    Hi, I have a question about the Google Chrome browser. When I navigate to my work's Outlook Web Access site to read my emails, the Chrome browser prompts me for a username and password to the server, saying "Authentication Required - the server XXXXXX.XXX:443 requires a username and password". After I put them in, I then have to enter the normal OWA username and password to access my emails as per normal. The funny thing is:

    1] If I click CANCEL on the first dialog it takes me to the OWA screen and I can log in as normal anyway. However, subsequent page clicks will keep prompting me each time for the server credentials.

    2] I am NOT prompted for a server UN and PW if I use IE or Firefox.

    Does anyone know how to stop Chrome from asking me each time? Or is it a server setting? I do know that a friend who uses the same browser (Chrome) and also OWA does not have the same problem (NB: they work at a different company). Thanks!

    Read the article

  • iChat Screen Sharing - Keyboard shortcut to switch back to my computer without messing with the mouse

    - by Sergio Oliveira Jr.
    When you are watching a screen-sharing session controlled by the other user - in other words, you are watching the other user doing stuff on his computer - how do you switch back to your own computer without disrupting the other user by stealing the mouse to click on the "My Computer" window? Put simply: to go back to my computer I have to click in the "My Computer" little window at the bottom, but that requires me to use the mouse, which is being used by the other user. There must be a way to use a keyboard shortcut to perform that action without bothering the other user, who is using the mouse to do something important. Anyone? Thanks, -Sergio

    Read the article

  • Windows 7 KSOD On Login

    - by Brandon Bertelsen
    For those that are unaware, KSOD means blacK Screen Of Death. Essentially, when Windows starts, my computer shows only the cursor and a black screen. It seems like any and all shell elements are disabled (or perhaps not started). I have seen a number of these questions asked, none of which have matched my situation.

    1. CTRL + ALT + ... does not respond
    2. Restarting in safe mode results in the same KSOD
    3. sfc /scannow seems to have no effect when typed at the command prompt that is accessed using the recovery tools via the install disk

    Update to item 3: sfc /scannow reports: "There is a system repair pending which requires reboot to complete. Restart Windows and run sfc again." However, Windows does not restart past the KSOD.

    Update to item 3, as per Soandos' comment re: /offbootdir: sfc /scannow /offbotdir=e:\ /windir=e:\windows reports "Windows resource protection found corrupt files but was unable to fix some of them. Details are included in the CBS.log..."
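    For reference, the offline form of sfc documented by Microsoft uses the /offbootdir and /offwindir switches; a sketch against the drive letters quoted in the question (whether the switches above are typos or just transcription is unclear):

        sfc /scannow /offbootdir=E:\ /offwindir=E:\Windows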

    Read the article

  • Trust my work domain on a Dev Domain without a domain level password

    - by Vaccano
    I set up a virtual machine to host a dev version of TFS (to test plugins on). Getting a computer on my work domain requires large amounts of red tape and paperwork that I would rather not do. I created my own domain on the VM and I would like to trust all users from my work domain on that VM domain. But when I tried to set up the trust I needed a password from my work domain (which I don't have). Am I trying to do something nefarious? I just want to be able to authenticate to my test TFS (VM) server as me (my login on my work domain). Is there a way to do that without having to have a domain-level password for my work domain? (My VM is a Windows Server 2008 R2 server.)

    Read the article

  • Rails3 environment running very slow on Windows XP, Ubuntu 9.04, Ubuntu 9.10

    - by bergyman
    I've tried all three (granted the Ubuntu versions were via VirtualBox with XP as a host, but I gave the images all the available RAM my system has). Loading the rails environment is taking 30-60 seconds. rails console, rake test:units - anything that requires rails to load up. And not just on the first go - every time. I've even used autotest to see if it helps with execution time for unit tests, but it doesn't. Any time I change one test, it still takes 30 seconds to load them, and then about 4 seconds to execute. Has anyone else come across this issue? Has anyone figured out any way to fix this?

    Read the article

  • upgrade glibc on RHEL4 without breaking anything

    - by SpliFF
    I have a static version of wkhtmltopdf which requires glibc-2.4:

        wkhtmltopdf: /lib/tls/libc.so.6: version `GLIBC_2.4' not found (required by wkhtmltopdf)

    I have apt installed with the DAG repos. Other than that the server is pretty stock-standard except for ColdFusion MX7. My question is, is it safe to just "apt update glibc"? Will the updated glibc clobber the old one, or will they co-exist? Should I "apt upgrade" the whole server? I'm pretty sure everything else (Apache2, Postgres8, etc.) will handle the upgrade, but ColdFusion concerns me due to its proprietary nature.

    Read the article

  • Got black screen when recording screen from xvfb by ffmpeg x11grab device

    - by shawnzhu
    I'm trying to record video from a Firefox run by xvfb-run, but it always outputs nothing in the video file except a black screen. Here's what I did. Start a Firefox and open google.com:

        $ xvfb-run firefox https://google.com

    Then it will use the default display server number 99. I can see the display information with the command xdpyinfo -display :99. A screenshot works very well with this command:

        $ xwd -root -silent -display :99.0 | xwdtopnm | pnmtojpeg > screen.jpg

    Start using ffmpeg to record a video:

        $ ffmpeg -f x11grab -i :99.0 out.mpg

    When I play the video file out.mpg, there's a black screen all the time. Is there any parameter I missed?

    Updates: I made progress - the video works (instead of showing only a black screen) with this command:

        $ ffmpeg -y -r 30 -g 300 -f x11grab -s 1024x768 -i :99 -vcodec qtrle out.mov

    Notice that it requires the screen resolution to match, by specifying more options to xvfb-run:

        $ xvfb-run -s "-screen 0 1224x768x16" -a firefox http://google.com

    But I still want to get more feedback and answers here.

    Read the article

  • Windows 8 asks for DotNet 3.5 to install DotNet 3.5

    - by William.Ebe
    While trying to install some tools in Windows 8, it showed me a nice message saying it requires the DotNet framework. Well, that's fine. But the bad part is that I have offline installers for v2.0 and v3.5, and when I try to run those installers Windows shows me the same message as above. I can't consider the web installer, because I don't want to download around 100MB. The installers I have work fine on other versions like Windows 7 or XP. Any fixes?
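    For what it's worth, on Windows 8 the 3.5 runtime is shipped as a "Feature on Demand" rather than as a standalone installer, so the usual offline route is to enable it from the installation media with DISM. A sketch, assuming the Windows 8 install media is mounted as drive D: and this is run from an elevated command prompt:

        dism /online /enable-feature /featurename:NetFx3 /All /Source:D:\sources\sxs /LimitAccess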

    Read the article

  • "Recursive" Wildcard DNS for Amazon Route 53

    - by Brian
    The title of the question might be misleading because I'm not an expert and I'm trying to learn the proper terminology. That being said, I'm wondering if it's possible to set up a wildcard DNS record for any number of dot-separated labels in domain names and have them all point to the network root. This is for WordPress multisite: users will have the option of choosing a mapped domain name, and I want to configure my DNS so that both mysite.co.uk.network.com and mysite.com.network.com are valid (I realize this is a bit ugly, but WordPress multisite requires that each site have a unique site_url, and I'd prefer to preserve the period-delimited appearance if it's possible).
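    Worth noting as a starting point: standard DNS wildcards (RFC 4592) can match more than one label, so a single record along these lines may cover both examples above. This is only a zone-file style sketch with a placeholder address, and Route 53's wildcard handling should be confirmed against its documentation:

        ; any otherwise-unmatched name under network.com resolves to the multisite host
        *.network.com.   300   IN   A   203.0.113.10

    Both mysite.com.network.com and mysite.co.uk.network.com would then resolve without per-tenant records, as long as no more-specific record exists between the wildcard and the queried name.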

    Read the article

  • How to configure basic authentication in Apache httpd virtual hosts?

    - by Jader Dias
    I'm trying to configure Mercurial access using Apache httpd. It requires authentication. My /etc/apache2/sites-enabled/mercurial looks like this:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            UseCanonicalName Off
            ServerAdmin webmaster@localhost
            AddHandler cgi-script .cgi
            ScriptAliasMatch ^(.*) /usr/lib/cgi-bin/hgwebdir.cgi/$1
        </VirtualHost>

    Every tutorial I read on the internet tells me to insert these lines:

        AuthType Basic
        AuthUserFile /usr/local/etc/httpd/users

    But when I do it I get the following error:

        # /etc/init.d/apache2 reload
        Syntax error on line 8 of /etc/apache2/sites-enabled/mercurial:
        AuthType not allowed here

    My distro is a customized Ubuntu called Turnkey Linux Redmine.
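    The "AuthType not allowed here" error generally means the auth directives were placed directly inside the <VirtualHost> rather than inside a directory-type container. A minimal sketch of the vhost above with the directives wrapped in a <Location> block (the realm name is a placeholder):

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            UseCanonicalName Off
            ServerAdmin webmaster@localhost
            AddHandler cgi-script .cgi
            ScriptAliasMatch ^(.*) /usr/lib/cgi-bin/hgwebdir.cgi/$1

            <Location />
                AuthType Basic
                AuthName "Mercurial repositories"
                AuthUserFile /usr/local/etc/httpd/users
                Require valid-user
            </Location>
        </VirtualHost>

    The users file itself can be created with, for example, htpasswd -c /usr/local/etc/httpd/users someuser (drop -c when adding further users).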

    Read the article

  • Can you authenticate into SSAS with AD LDS (ADAM) accounts?

    - by Jaxidian
    I'm very new to AD LDS and experienced but not qualified with SSAS, so my apologies for my ignorance with these. We have a couple of implementations where we expose SSAS via an HTTPS proxy (msmdpump.dll), and currently we have a temporary domain set up handling this (where our end-users have a second account and credentials to manage because of this = non-ideal). I want to move us towards a more permanent solution, which I'm thinking of as moving all authentication to AD LDS for our web apps, SSAS, and others. However, SSAS is where I'm concerned about this. I know SSAS requires Windows Authentication to play nicely, and that this ultimately means Active Directory will be involved. Is there a way to get this done with AD LDS instead of having to use a full AD DS implementation? If so, how? (Note: My question over at StackOverflow had a suggestion that I post this question here on ServerFault instead. My apologies if I'm not asking in the right forum.)

    Read the article

  • Adding MySQL servers/ data nodes into database clustering without restarting mysql cluster

    - by Dwayne Johnson
    I currently have MySQL clustering up and running. For high scalability, is there a way to include additional MySQL nodes, data nodes, or management nodes without restarting the entire cluster? I wish to understand how it is implemented, or whether there is documentation I can read. I believe only the latest version can support this. I am running NDB 7.0. I am aware that I am able to add the nodes online, but it requires me to perform a rolling restart. What other approach can I take to implement this without a restart in my network?

    Read the article

  • How to remove buttons in AnythingSlider and stop the sliding?

    - by user244394
    I'm currently using the anythingSlider and it works quite well. But if there is only one li, how do I make it stop sliding and remove the buttons displayed below? The li elements are generated from the database, so sometimes there's only one. I want the buttons to show only when there is more than one image; if there is one image, all the buttons (back, forward, pause) should not be displayed. Does anybody know of a way of stopping it sliding if there's only one li, and removing the buttons when there is only one image?

    Thank you. Currently I have the working version posted below; the old code is working. I tried to replace it with yours but it didn't seem to work. Is there something else that needs to be added?

        function formatText(index, panel) {
            return index + "";
        }

        $(function () {
            $('.anythingSlider').anythingSlider({
                easing: "easeInOutExpo",        // Anything other than "linear" or "swing" requires the easing plugin
                autoPlay: true,                 // This turns off the entire FUNCTIONALY, not just if it starts running or not.
                delay: 7500,                    // How long between slide transitions in AutoPlay mode
                startStopped: false,            // If autoPlay is on, this can force it to start stopped
                animationTime: 1250,            // How long the slide transition takes
                hashTags: true,                 // Should links change the hashtag in the URL?
                buildNavigation: true,          // If true, builds and list of anchor links to link to each slide
                pauseOnHover: true,             // If true, and autoPlay is enabled, the show will pause on hover
                startText: "",                  // Start text
                stopText: "",                   // Start text
                navigationFormatter: formatText // Details at the top of the file on this use (advanced use)
            });

            $("#slide-jump").click(function(){
                $('.anythingSlider').anythingSlider(6);
            });
        });

    UPDATED WITH YOUR CODE:

        function formatText(index, panel) {
            return index + "";
        }

        $(function () {
            var singleSlide = true,
                options = {
                    autoPlay: false,                // This turns off the entire FUNCTIONALY, not just if it starts running or not.
                    buildNavigation: false,         // If true, builds and list of anchor links to link to each slide
                    easing: "easeInOutExpo",        // Anything other than "linear" or "swing" requires the easing plugin
                    delay: 3000,                    // How long between slide transitions in AutoPlay mode
                    animationTime: 600,             // How long the slide transition takes
                    hashTags: true,                 // Should links change the hashtag in the URL?
                    pauseOnHover: true,             // If true, and autoPlay is enabled, the show will pause on hover
                    navigationFormatter: formatText // Details at the top of the file on this use (advanced use)
                };

            // Add player options if more than one slide exists
            if ( $('.anythingSlider div ul li').length > 1 ) {
                $.extend(options, {
                    autoPlay: true,
                    startStopped: false, // If autoPlay is on, this can force it to start stopped
                    startText: "",       // Start text
                    stopText: "",        // Start text
                    buildNavigation: true
                });
                singleSlide = false;
            }

            // Initiate anythingSlider
            $("#slide-jump").click(function(){
                $('.anythingSlider').anythingSlider(6);
            });

            // hide anythingSlider navigation arrows
            if (singleSlide) {
                $('.anythingSlider a.arrow').hide();
            }
        });

    HTML TAGS

    Update May 25, 2010: When using $('.anythingSlider').anythingSlider(options); instead of $('.anythingSlider').anythingSlider(6); the slider runs, but I noticed that I get a JavaScript error "Object required". Is there anything else I need to pass? Since before, anythingSlider was taking 6 instead of options - where do I pass that 6, since it's looking for it?
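    One way to approach this, sketched under the assumption that the selectors match the markup used in the snippets above (the '.anythingSlider div ul li' selector and the 'a.arrow' control class are taken from them; the exact class of the play/pause button depends on the AnythingSlider version, so inspect the generated markup): count the slides first, build the options from that count, and hide the remaining controls when there is only one slide.

        $(function () {
            var $slider  = $('.anythingSlider'),
                multiple = $slider.find('div ul li').length > 1; // more than one slide?

            $slider.anythingSlider({
                easing: "easeInOutExpo",
                delay: 7500,
                animationTime: 1250,
                hashTags: true,
                pauseOnHover: true,
                autoPlay: multiple,        // only animate when there is something to slide to
                buildNavigation: multiple, // only build the numbered links for real slideshows
                navigationFormatter: formatText
            });

            // With a single slide, hide whatever controls were still rendered.
            if (!multiple) {
                $slider.find('a.arrow').hide();
            }
        });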

    Read the article

  • While running a batch file in Windows 7 with Admin rights from a thumb drive, how can I get the file path back to the thumb drive?

    - by Jeremy DeStefano
    I have a piece of software that is being distributed to several departments for installation onto Windows 7 laptops. They install the software from the thumb drive and then they have to run a script to properly configure the software. Because the script is changing registry files and program files, it requires Admin rights. When running as Admin, it drops into the System32 folder and I no longer have an easy scriptable way to access files that need to be copied from the thumb drive, simply because I don't know for sure what drive letter it's going to use on the various machines. Previous installations were on Windows XP, and the command window's path stayed within the script folder. I've found similar questions here and I have already tried relative paths, but it can't seem to find the proper folder on the thumb drive, or I can't seem to find the proper way to format it.
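    One batch-file detail that usually covers exactly this situation: %~dp0 expands to the drive and folder of the script itself, regardless of what the current directory becomes after elevation, so the script can refer back to the thumb drive without knowing its letter. A minimal sketch (the file names and destination are placeholders):

        @echo off
        rem %~dp0 is the drive and folder this script was launched from, e.g. F:\install\
        set "SRC=%~dp0"

        rem copy support files from the thumb drive folder next to this script
        copy "%SRC%settings.reg" "%ProgramData%\MyApp\"
        copy "%SRC%config.ini"   "%ProgramData%\MyApp\"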

    Read the article
