Search Results

Search found 21212 results on 849 pages for 'apt key'.


  • Having XP VM use my host OSX ssh tunnel to connect to a remote site?

    - by Manachi
    I am using Mac OSX and have Windows XP running on VMware Fusion. I'm creating an ssh tunnel from OSX to a remote server, and then trying to have Windows XP use that tunnel (I actually use a program called Proxifier on XP to route my XP MS SQL Server traffic through that tunnel). Note that I can successfully create an ssh tunnel (on port 9333) from PuTTY on XP to the remote host, have SQL Server proxify through that tunnel, and it all works correctly. However, when I try to set up the tunnel in OSX and have Proxifier in XP point to the OSX tunnel instead of localhost, it doesn't seem to connect. Here is the OSX command I'm using to create the tunnel: ssh -i /my/key -p 9001 -D 9333 -g me@remotehostname Then I set my XP Proxifier to point to macosxhostname:9333 (instead of the previous localhost:9333, which worked correctly when using PuTTY). Any suggestions on what I may have missed? My XP firewall is turned off while setting this up.
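
    A minimal sketch of the tunnel plus a quick check that the VM can actually reach it (the key, ports, and host name are from the question; the explicit bind address and the netstat check are a suggestion, not something from the original post):

      # Build the SOCKS tunnel; 0.0.0.0 makes the bind address explicit
      # (equivalent in effect to -g, but easier to verify)
      ssh -i /my/key -p 9001 -D 0.0.0.0:9333 me@remotehostname

      # On OSX, confirm the listener is not bound to loopback only
      netstat -an | grep 9333
      # Want:  tcp4  0  0  *.9333          *.*  LISTEN
      # Not:   tcp4  0  0  127.0.0.1.9333  *.*  LISTEN

    If netstat shows the listener on 127.0.0.1 only, the XP guest cannot reach it, and Proxifier will fail to connect exactly as described.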

    Read the article

  • Webcast Series Part I: The Shifting of Healthcare’s Infrastructure Strategy – A lesson in how we got here

    - by Melissa Centurio Lopes
    Register today for the first part of a three-part webcast series and discover the changing strategy of healthcare capital planning and construction. Learn how Project Portfolio Management solutions are the key to financial discipline, increased operational efficiency, and risk mitigation in this changing environment. Register here for the first webcast on Thursday, November 1, 2012, at 10:00 a.m. PT / 1:00 p.m. ET. In this engaging and informative webcast, Garrett Harley, Sr. Industry Strategist, Oracle Primavera, and Thomas Koulouris, Director, PricewaterhouseCoopers, will explore: the evolution of the healthcare delivery system; drivers and challenges facing the current healthcare infrastructure; and the importance of communication and integration between Providers and Contractors to their bottom lines. View the evite for more details.

    Read the article

  • An Overview of Batch Processing in Java EE 7

    - by Janice J. Heiss
    Up on otn/java is a new article by Oracle senior software engineer Mahesh Kannan, titled “An Overview of Batch Processing in Java EE 7.0,” which explains the new batch processing capabilities provided by JSR 352 in Java EE 7. Kannan explains that “Batch processing is used in many industries for tasks ranging from payroll processing; statement generation; end-of-day jobs such as interest calculation and ETL (extract, load, and transform) in a data warehouse; and many more. Typically, batch processing is bulk-oriented, non-interactive, and long running—and might be data- or computation-intensive. Batch jobs can be run on schedule or initiated on demand. Also, since batch jobs are typically long-running jobs, check-pointing and restarting are common features found in batch jobs.” JSR 352 defines the programming model for batch applications plus a runtime to run and manage batch jobs. The article covers feature highlights and selected APIs, describes the structure of the Job Specification Language, and explains some of the key functions of JSR 352 using a simple payroll processing application. The article also describes how developers can run batch applications using GlassFish Server Open Source Edition 4.0. Kannan summarizes the article as follows: “In this article, we saw how to write, package, and run simple batch applications that use chunk-style steps. We also saw how the checkpoint feature of the batch runtime allows for the easy restart of failed batch jobs. Yet, we have barely scratched the surface of JSR 352. With the full set of Java EE components and features at your disposal, including servlets, EJB beans, CDI beans, EJB automatic timers, and so on, feature-rich batch applications can be written fairly easily.” Check out the article here.
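
    For readers who want a feel for the model before reading the article, here is a minimal sketch of a chunk-style job descriptor in JSR 352's XML job language (the job and artifact names are placeholders, not taken from the article):

      <job id="payrollJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
        <step id="processPayroll">
          <chunk item-count="10">
            <reader ref="payrollItemReader"/>
            <processor ref="payrollItemProcessor"/>
            <writer ref="payrollItemWriter"/>
          </chunk>
        </step>
      </job>

    The runtime reads, processes, and writes items in chunks of ten, taking a checkpoint after each chunk, which is what lets a failed job restart where it left off rather than from the beginning.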

    Read the article

  • Ubuntu Remix on Acer Aspire One Netbook keeps freezing

    - by user7553
    It came with a Windows XP installation and I just couldn't deal with the screen constantly blacking out; later on I found out this netbook has a specific key designed for that. Anyway, I installed Ubuntu Remix and that fixed it. However, a lot of bad things are happening now: 1) If I play a long video or lots of music, it will lock up after an hour. 2) Plugged into an external monitor for more than an hour, it will black out. 3) I'm constantly running fsck after a lockup, and then I lose nodes/fragments and whatnot :( I am really not sure what to do; both the blackouts and the lockups really mess my hard drive up, and I have to run fsck manually every time it happens. My gf has the same kind of netbook with Remix (not the same model number) and it runs perfectly fine on hers, except for Skype eating up all her CPU.
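
    Two standard first steps that may help narrow this down (a sketch using the usual Debian/Ubuntu log locations, which are an assumption, not details from the question):

      # After a lockup, look for kernel errors logged just before the freeze
      grep -iE "error|panic|oops" /var/log/kern.log /var/log/syslog | tail -50

      # Schedule a full filesystem check at the next boot instead of
      # running fsck by hand against a mounted filesystem
      sudo touch /forcefsck
      sudo reboot

    Running fsck on a mounted filesystem can itself cause the lost nodes and fragments described above, so letting the boot sequence perform the check is the safer routine.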

    Read the article

  • Microsoft’s new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year.  Microsoft literally buys computers by the truckload.  From what I understand, it’s a typical practice amongst large software vendors.  You plug a few wires in, you test it, and you instantly have mega tera tera flops (don’t hold me to that number).  Microsoft has been trying to plug away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn’t surprise me that I was recently sent an executive email concerning Microsoft’s new technical computing initiative.  I find it to be a great marketing idea with actual substance behind their real work.  From the programmer academic perspective, in college we dreamed about this type of processing power.  This has decades of computer science theory behind it. A copy of the email received follows.  (Note that I almost deleted this email, thinking it was spam due to its length.) We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves.
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed - reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends.
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • A WPF Image Button

    - by psheriff
    Instead of a normal button with words, sometimes you want a button that is just graphical. Yes, you can put an Image control in the Content of a normal Button control, but you still have the button outline, and trying to change the style can be rather difficult. Instead I like creating a user control that simulates a button, but just accepts an image. Figure 1 shows an example of three of these custom user controls to represent minimize, maximize and close buttons for a borderless window. Notice the highlighted image button has a gray rectangle around it. You will learn how to highlight using the VisualStateManager in this blog post. Figure 1: Creating a custom user control for things like image buttons gives you complete control over the look and feel. I would suggest you read my previous blog post on creating a custom Button user control as that is a good primer for what I am going to expand upon in this blog post. You can find this blog post at http://weblogs.asp.net/psheriff/archive/2012/08/10/create-your-own-wpf-button-user-controls.aspx. The User Control: The XAML for this image button user control contains just a few controls, plus a Visual State Manager. The basic outline of the user control is shown below:<Border Grid.Row="0"        Name="borMain"        Style="{StaticResource pdsaButtonImageBorderStyle}"        MouseEnter="borMain_MouseEnter"        MouseLeave="borMain_MouseLeave"        MouseLeftButtonDown="borMain_MouseLeftButtonDown">  <VisualStateManager.VisualStateGroups>  ... MORE XAML HERE ...  </VisualStateManager.VisualStateGroups>  <Image Style="{StaticResource pdsaButtonImageImageStyle}"         Visibility="{Binding Path=Visibility}"         Source="{Binding Path=ImageUri}"         ToolTip="{Binding Path=ToolTip}" /></Border> There is a Border control named borMain and a single Image control in this user control. That is all that is needed to display the buttons shown in Figure 1. The definition for this user control is in a DLL named PDSA.WPF. The Style definitions for both the Border and the Image controls are contained in a resource dictionary named PDSAButtonStyles.xaml. Using a resource dictionary allows you to create a few different resource dictionaries, each with a different theme for the buttons. The Visual State Manager: To display the highlight around the button as your mouse moves over the control, you will need to add a Visual State Manager group. Two different states are needed: MouseEnter and MouseLeave. In the MouseEnter you create a ColorAnimation to modify the BorderBrush color of the Border control. You specify the color to animate as “DarkGray”. You set the duration to less than a second. The TargetName of this storyboard is the name of the Border control “borMain” and since we are specifying a single color, you need to set the TargetProperty to “BorderBrush.Color”. You do not need any storyboard for the MouseLeave state.
Leaving this VisualState empty tells the Visual State Manager to put everything back the way it was before the MouseEnter event.<VisualStateManager.VisualStateGroups>  <VisualStateGroup Name="MouseStates">    <VisualState Name="MouseEnter">      <Storyboard>        <ColorAnimation             To="DarkGray"            Duration="0:0:00.1"            Storyboard.TargetName="borMain"            Storyboard.TargetProperty="BorderBrush.Color" />      </Storyboard>    </VisualState>    <VisualState Name="MouseLeave" />  </VisualStateGroup></VisualStateManager.VisualStateGroups>Writing the Mouse EventsTo trigger the Visual State Manager to run its storyboard in response to the specified event, you need to respond to the MouseEnter event on the Border control. In the code behind for this event call the GoToElementState() method of the VisualStateManager class exposed by the user control. To this method you will pass in the target element (“borMain”) and the state (“MouseEnter”). The VisualStateManager will then run the storyboard contained within the defined state in the XAML.private void borMain_MouseEnter(object sender,  MouseEventArgs e){  VisualStateManager.GoToElementState(borMain,    "MouseEnter", true);}You also need to respond to the MouseLeave event. In this event you call the VisualStateManager as well, but specify “MouseLeave” as the state to go to.private void borMain_MouseLeave(object sender, MouseEventArgs e){  VisualStateManager.GoToElementState(borMain,     "MouseLeave", true);}The Resource DictionaryBelow is the definition of the PDSAButtonStyles.xaml resource dictionary file contained in the PDSA.WPF DLL. This dictionary can be used as the default look and feel for any image button control you add to a window. <ResourceDictionary  ... >  <!-- ************************* -->  <!-- ** Image Button Styles ** -->  <!-- ************************* -->  <!-- Image/Text Button Border -->  <Style TargetType="Border"         x:Key="pdsaButtonImageBorderStyle">    <Setter Property="Margin"            Value="4" />    <Setter Property="Padding"            Value="2" />    <Setter Property="BorderBrush"            Value="Transparent" />    <Setter Property="BorderThickness"            Value="1" />    <Setter Property="VerticalAlignment"            Value="Top" />    <Setter Property="HorizontalAlignment"            Value="Left" />    <Setter Property="Background"            Value="Transparent" />  </Style>  <!-- Image Button -->  <Style TargetType="Image"         x:Key="pdsaButtonImageImageStyle">    <Setter Property="Width"            Value="40" />    <Setter Property="Margin"            Value="6" />    <Setter Property="VerticalAlignment"            Value="Top" />    <Setter Property="HorizontalAlignment"            Value="Left" />  </Style></ResourceDictionary>Using the Button ControlOnce you make a reference to the PDSA.WPF DLL from your WPF application you will see the “PDSAucButtonImage” control appear in your Toolbox. Drag and drop the button onto a Window or User Control in your application. 
I have not referenced the PDSAButtonStyles.xaml file within the control itself so you do need to add a reference to this resource dictionary somewhere in your application such as in the App.xaml.<Application.Resources>  <ResourceDictionary>    <ResourceDictionary.MergedDictionaries>      <ResourceDictionary         Source="/PDSA.WPF;component/PDSAButtonStyles.xaml" />    </ResourceDictionary.MergedDictionaries>  </ResourceDictionary></Application.Resources>This will give your buttons a default look and feel unless you override that dictionary on a specific Window or User Control or on an individual button. After you have given a global style to your application and you drag your image button onto a window, the following will appear in your XAML window.<my:PDSAucButtonImage ... />There will be some other attributes set on the above XAML, but you simply need to set the x:Name, the ToolTip and ImageUri properties. You will also want to respond to the Click event procedure in order to associate an action with clicking on this button. In the sample code you download for this blog post you will find the declaration of the Minimize button to be the following:<my:PDSAucButtonImage       x:Name="btnMinimize"       Click="btnMinimize_Click"       ToolTip="Minimize Application"       ImageUri="/PDSA.WPF;component/Images/Minus.png" />The ImageUri property is a dependency property in the PDSAucButtonImage user control. The x:Name and the ToolTip we get for free. You have to create the Click event procedure yourself. This is also created in the PDSAucButtonImage user control as follows:private void borMain_MouseLeftButtonDown(object sender,  MouseButtonEventArgs e){  RaiseClick(e);}public delegate void ClickEventHandler(object sender,  RoutedEventArgs e);public event ClickEventHandler Click;protected void RaiseClick(RoutedEventArgs e){  if (null != Click)    Click(this, e);}Since a Border control does not have a Click event you will create one by using the MouseLeftButtonDown on the border to fire an event you create called “Click”.SummaryCreating your own image button control can be done in a variety of ways. In this blog post I showed you how to create a custom user control and simulate a button using a Border and Image control. With just a little bit of code to respond to the MouseLeftButtonDown event on the border you can raise your own Click event. Dependency properties, such as ImageUri, allow you to set attributes on your custom user control. Feel free to expand on this button by adding additional dependency properties, change the resource dictionary, and even the animation to make this button look and act like you want.NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “A WPF Image  Button” from the drop down list.

    Read the article

  • MySQL 5.1.49 freezing every two days

    - by maximus
    Hi all, our MySQL system is "freezing" every two days. By "freezing" I mean the following: it doesn't respond to ping; we can't log in with SSH; we get no answer from MySQL; and there is no entry in the error logs, neither from Linux nor from MySQL. We have already changed to completely new hardware and have the same problem, so it's definitely not a hardware problem. We do not have any other software installed except a firewall (iptables rules). We can restart the server from another server using rsyslog (www.rsyslog.com) (software reset). Could someone help me by giving me some pointers on what I could do to figure out the problem? I have included every detail about our settings. Thank you in advance for your help. Max. Our system parameters and settings: System memory: 12GB Processor: Intel i7-920 quad-core Operating system: Debian 5 (lenny) 64bit MySQL 5.1.49 Databases: (a) a small phpBB forum (b) a 6GB database, 3 tables with about 15 million rows my.cnf # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = our-ip-address # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 256K thread_cache_size = 32 max_connections = 300 table_cache = 2048 #thread_concurrency = 4 # Used for InnoDB tables recommended to 50%-80% available memory innodb_buffer_pool_size = 6G # 20MB sometimes larger innodb_additional_mem_pool_size = 20M # 8M-16M is good for most situations innodb_log_buffer_size = 8M # Disable XA support because we do not use it innodb-support-xa = 0 # 1 is default wich is 100% secure but 2 offers better performance innodb_flush_log_at_trx_commit = 1 innodb_flush_method = O_DIRECT #innodb_thread_concurency = 8 # Recommended 64M - 512M depending on server size innodb_log_file_size = 512M # One file per table innodb_file_per_table # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M #query_cache_type = 1 #query_cache_min_res_unit= 2K #join_buffer_size = 1M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 # # Error logging goes to syslog.
This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. #server-id = 1 log_bin = /var/log/mysql/mysql-bin.log # WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian! expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # * InnoDB plugin # As of MySQL 5.1.38, the InnoDB plugin from Oracle is included in the MySQL source code. # It has many improvements and better performances than the built-in InnoDB storage engine. # Please read http://www.innodb.com/products/innodb_plugin/ for more information. # Uncommenting the two following lines to use the InnoDB plugin. ignore_builtin_innodb plugin-load=innodb=ha_innodb_plugin.so # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # !includedir /etc/mysql/conf.d/ UPDATE: After installing sysstat and configuring it to collect data every minute, I now have performance data; I used sar to generate an output log. The log file is too big to include here, so I uploaded it to box.net; the link is http://www.box.net/shared/xc6rh7qqob SECOND UPDATE: We started a ping command in the background, and that solved the problem. The server has now been working for more than a week. We still don't know what the problem is.
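
    For anyone hitting the same symptom, a rough sketch of the workaround described above made persistent, plus a sar invocation for inspecting the collected data (the ping target, interval, and sysstat file name are examples, not values from the original setup):

      # Keep a low-rate ping running across logouts
      nohup ping -i 30 192.168.1.1 > /dev/null 2>&1 &

      # Review CPU, memory, and I/O activity around the time of a freeze
      sar -u -r -b -f /var/log/sysstat/sa15

    That a background ping prevents the freeze points away from MySQL and toward the NIC driver or its power management, which would also explain why both the system and MySQL error logs stay empty.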

    Read the article

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle-tested on this and failed to achieve my goal, which is to deny all access to all directories except the Public directory, and to allow access to all other directories only from specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled: sudo a2enmod ssl sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod rewrite sudo a2ensite default-ssl Outside of the script I copied the sites-available configuration to sites-enabled, then reloaded Apache. I have a directory created for Railo CFML located at /var/www/Railo/ Navigating the browser to http://Server_IP_Address/Railo forces SSL and relocates to https://Server_IP_Address/Railo, which shows index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appear to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple; Railo contains: error, Public, NotPublic, Sandbox. These are my sites-enabled configurations: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www #Default Deny All to prevent walking backwards in file system Alias /Railo/ "/var/www/Railo/" <Directory ~ ".*/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc RewriteEngine on RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R] </VirtualHost> and <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www Alias /Railo/ "/var/www/Railo/" <Directory ~ "/var/www/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed.
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html #Proxy .cfm and cfc requests to Railo ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1 ProxyPassReverse / http://127.0.0.1:8888/ #Deny access to admin except for local clients <Location /railo-context/admin/> Order deny,allow Deny from all # Allow from <Omitted> # Allow from <Omitted> Allow from 127.0.0.1 </Location> </VirtualHost> </IfModule> The apache2.conf includes the following: # Include the virtual host configurations: Include sites-enabled/ <IfModule !mod_jk.c> LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so </IfModule> <IfModule mod_jk.c> JkMount /*.cfm ajp13 JkMount /*.cfc ajp13 JkMount /*.do ajp13 JkMount /*.jsp ajp13 JkMount /*.cfchart ajp13 JkMount /*.cfm/* ajp13 JkMount /*.cfml/* ajp13 # Flex Gateway Mappings # JkMount /flex2gateway/* ajp13 # JkMount /flashservices/gateway/* ajp13 # JkMount /messagebroker/* ajp13 JkMountCopy all JkLogFile /var/log/apache2/mod_jk.log </IfModule> I believe I understand most of this except the jk_module inclusion which I've noticed has an error that shows up in the logs that I can't sort out: [warn] No JkShmFile defined in httpd.conf. 
Using default /etc/apache2/logs/jk-runtime-status I've checked my regular expression against the directory paths with RegexBuddy just to be sure the pattern was correct. The problem doesn't appear to be regex-related, although I may have something incorrect in the Directory directive. The Location directive does seem to be working correctly for blocking access to the Railo admin site.
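
    One structure that sidesteps the regex entirely (a sketch in the same Apache 2.2 syntax used above; the admin IP is a placeholder): deny the whole Railo tree, then re-open just Public, relying on the fact that more specific literal <Directory> sections are merged after less specific ones.

      <Directory /var/www/Railo>
          Order Deny,Allow
          Deny from all
          # Allow from 203.0.113.10   # specific IPs allowed into non-public dirs
      </Directory>

      <Directory /var/www/Railo/Public>
          Order Allow,Deny
          Allow from all
      </Directory>

    Also worth knowing: regex sections (<Directory ~ ...> and <DirectoryMatch>) are applied after all literal <Directory> sections, so mixing the two styles is a common source of surprising merge results.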

    Read the article

  • unable to get local issuer certificate - Ubuntu 11.04

    - by user1443867
    I'm facing a strange issue. My VPS from Linode has no problem connecting to the Apple push server with the following command: openssl s_client -connect gateway.sandbox.push.apple.com:2195 -cert Test_dev_apns_cert.pem -key Test_dev_apns_key.pem However, when I use the same pem files with the same command from another, low-budget VPS, I get this: Verify return code: 20 (unable to get local issuer certificate) Both are running Ubuntu 11.04 with LAMP installed as usual. No special SSL configuration has been done on either server. Am I missing something here?
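
    Verify return code 20 usually means OpenSSL cannot find the issuing CA chain locally, not that the certificate pair is bad. A sketch of two things to try on the failing VPS (the bundle path is the Ubuntu default and should be treated as an assumption):

      # Point s_client at the system CA bundle explicitly
      openssl s_client -connect gateway.sandbox.push.apple.com:2195 \
        -CAfile /etc/ssl/certs/ca-certificates.crt \
        -cert Test_dev_apns_cert.pem -key Test_dev_apns_key.pem

      # If that still fails, the bundle itself may be missing or stale
      sudo apt-get install --reinstall ca-certificates

    A plausible explanation for the difference between the two servers is simply that the Linode image ships a complete ca-certificates package and the budget VPS image does not.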

    Read the article

  • Configuring Nginx SSL alongside non-ssl

    - by user55145
    I'm trying to enable SSL on my current Nginx configuration, which works fine. However, I'm wondering if it's possible to do this alongside HTTP, so that I do not need another server{} section that would just be a replication of the HTTP section. I thought the following would work; however, I get the error below when accessing via http:// 400 Bad Request The plain HTTP request was sent to HTTPS port Nginx config: ssl_certificate /etc/nginx/ssl/domains.pem; ssl_certificate_key /etc/nginx/ssl/server.key; server { listen 80; listen 443; //other configuration }
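
    Since nginx 0.7.14 a single server{} block can serve both protocols; the trick is flagging only the 443 listener as SSL rather than using a server-wide "ssl on". A sketch based on the configuration above:

      server {
          listen 80;
          listen 443 ssl;

          ssl_certificate     /etc/nginx/ssl/domains.pem;
          ssl_certificate_key /etc/nginx/ssl/server.key;

          # ...shared configuration for both protocols...
      }

    A server-wide "ssl on" (often inherited from another include file) makes the port-80 listener expect TLS too, which is exactly what produces the 400 error quoted above; the per-listener ssl flag avoids that.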

    Read the article

  • Make 'volume up' from 'mute' go to volume step one not previous volume on Macbook Pro

    - by Bill Cheatham
    Currently, on my Macbook Pro (OSX 10.6.5), I will often use the 'mute' button to turn the volume off, for example if I'm working at night. I may then watch a video or something, and decide to have the volume on very quietly. So I press the 'volume up' key—but instead of going to volume level one (the logical 'volume step up' from mute), the volume jumps to whatever mega-decibels I was blasting out before I muted in the first place. Is there a way to change the behavior of this? Ideally the 'mute' button would be changed to actually turn the volume to zero, rather than just muting. (And re-pressing mute would turn the volume back up to the original). Thanks!

    Read the article

  • Customize Your WordPress Blog & Build an Audience

    - by Matthew Guay
    Want to quickly give your blog a fresh coat of paint and make it stand out from the pack?  Here’s how you can customize your WordPress blog and make it uniquely yours. WordPress.com offers many features that help you make your blog the best it can be.  Although it doesn’t offer as many customization features as full WordPress running on your own server, it still makes it easy to make your free blog as professional or cute as you like.  Here we’ll look at how you can customize features in your blog and build an audience. Personalize Your Blog WordPress makes it easy to personalize your blog.  Most of the personalization options are available under the Appearance menu on the left.  Here we’ll look at how you can use most of these. Add New Theme WordPress is popular for the wide range of themes available for it.  While you cannot upload your own theme to your blog, you can choose from over 90 free themes currently available with more added all the time.  To change your theme, select the Themes page under Appearance. The Themes page will show random themes, but you can choose to view them in alphabetical order, by popularity, or how recently they were added.  Or, you can search for a theme by name or features. One neat way to find a theme that suits your needs is the Feature Filter.  Click the link on the right of the search button, and then select the options you want to make sure your theme has.  Click Apply Filters and WordPress will streamline your choices to themes that contain these features. Once you find a theme you like, click Preview under its name to see how your blog will look. This will open a popup that shows your blog with the new theme.  Click the Activate link in the top right corner of the popup if you want to keep this theme; otherwise, click the x in the top left corner to close the preview and continue your search for one you want.   Edit Current Theme Many of the themes on WordPress have customization options so you can make your blog stand out from others using the same theme.  The default theme Twenty Ten lets you customize both the header and background image, and many themes have similar options. To choose a new header image, select the Header page under Appearance.  Select one of the pre-installed images and click Save Changes, or upload your own image. If you upload an image larger than the size for the header, WordPress will let you crop it directly in the web interface.  Click Crop Header when you’ve selected the portion you want for the header of your blog. You can also customize your blog’s background from the Background page under Appearance.  You can upload an image for the background, or can enter a hex value of a color for a solid background.  If you’d rather visually choose a color, click Select a Color to open a color wheel that makes it easy to choose a nice color.  Click Save Changes when you’re done. Note: not all themes contain these customization options, but many are flexible.  You cannot edit the actual CSS of your theme on free WordPress blogs, but you can purchase the Custom CSS Upgrade for $14.97/year to add this ability. Add Widgets With Extra Content Widgets are small addons for your blog, similar to Desktop Gadgets in Windows 7 or Dashboard widgets in Mac OS X.  You can add widgets to your blog to show recent Tweets, favorite Flickr pictures, popular articles, and more.  To add widgets to your blog, open the Widgets page under Appearance. You’ll see a variety of widgets available in the main white box.
Select one you want to add, and drag it to the widget area of your choice.  Different themes may offer different areas to place Widgets, such as the sidebar or footer. Most of the widgets offer configuration options.  Click the down arrow beside its name to edit it.  Set them up as you wish, and click Save on the bottom of the widget. Now we’ve got some nice dynamic content on our blog that’s automatically updated from the net. Choose Blog Extras By default, WordPress shows previews of websites when visitors hover over links on your blog, uses a special mobile theme when people visit from a mobile device, and shows related links to other blogs on the WordPress network at the end of your posts.  If you don’t like these features, you can disable them on the Extras page under Appearance. Build Your Audience Now that your blog is looking nice, we can make sure others will discover it.  WordPress makes it easy for you to make your site discoverable on search engines or social network, and even gives you the option to keep your site private if you’d prefer.  Open the Privacy page under Tools to change your site’s visibility.  By default, it will be indexed by search engines and be viewable to everyone.  You can also choose to leave your blog public but block search engines, or you can make it fully private. If you choose to make your blog private, you can enter up to 35 usernames of people you want to be able to see it.  Each private visitor must have a WordPress.com account so they can login.  If you need more than 35 private members, you can upgrade to allow unlimited private members for $29.97/year. Then, if you do want your site visible from search engines, one of the best ways to make sure your content is discovered by search engines is to register with their webmaster tools.  Once registered, you need to add your key to your site so the search engine will find and index it.  On the bottom of the Tools page, WordPress lets you enter your key from Google, Bing, and Yahoo! to make sure your site is discovered.  If you haven’t signed up with these tools yet, you can signup via the links on this page as well. Post Blog Updates to Social Networks Many people discover the sites they visit from friends and others via social networks.  WordPress makes it easy to automatically share links to your content on popular social networks.  To activate this feature, open the My Blogs page under Dashboard. Now, select the services you want to activate under the Publicize section.  This will automatically update Yahoo!, Twitter, and/or Facebook every time you publish a new post. You’ll have to authorize your connection with the social network.  With Twitter and Yahoo!, you can authorize them with only two clicks, but integrating with Facebook will take several steps.   If you’d rather share links yourself on social networks, you can get shortened URLs to your posts.  When you write a new post or edit an existing one, click the Get Shortlink button located underneath the post’s title. This will give you a small URL, usually 20 characters or less, that you can use to post on social networks such as Twitter.   This should help build your traffic, and if you want to see how many people are checking out your site, check out the stats on your Dashboard.  This shows a graph of how many people are visiting, and popular posts.  Click View All if you’d like more detailed stats including search engine terms that lead people to your blog. 
Conclusion Whether you’re looking to make a private blog for your group or publish a blog that’s read by millions around the world, WordPress is a great way to do it for free.  And with all of the personalization options, you can make it memorable and exciting for your visitors. If you don’t have a blog, you can always sign up for a free one from WordPress.com.  Also make sure to check out our article on how to Start Your Own Blog with WordPress.

    Read the article

  • Removing port forwardings programmatically on a ControlMaster SSH session

    - by aef
    Quite a while ago I got an answer telling me how to add a port forwarding on a running SSH ControlMaster process. Knowing that helps a lot, but I'm still missing a way to remove such a port forwarding once I no longer need it. As far as I know, you can do that through the internal command key sequence on normal connections, but this seems to be disabled for ControlMaster clients. Even if it were possible, I would need a solution I can automate with scripts, which is surely not so easy that way. Is there a way to do it? And is it easily automatable?
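
    Newer OpenSSH releases added exactly this to the multiplexing control channel: as of OpenSSH 6.0, -O forward and -O cancel can be issued against a running master, and both are fully scriptable (the host alias and forwarding spec below are examples):

      # Add a forwarding to the running master connection
      ssh -O forward -L 9333:localhost:5432 user@host

      # Remove it again later, from a script, without touching the session
      ssh -O cancel -L 9333:localhost:5432 user@host

      # Confirm the master is still alive
      ssh -O check user@host

    On OpenSSH older than 6.0 there is no cancel operation, so the only scriptable fallback is tearing the master down with -O exit and rebuilding it with the forwardings still wanted.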

    Read the article

  • NEW 2-Day Instructor Led Course on Oracle Data Mining Now Available!

    - by chberger
    A NEW 2-day instructor-led course on Oracle Data Mining has been developed for customers and anyone wanting to learn more about data mining, predictive analytics and knowledge discovery inside the Oracle Database.  Course Objectives: Explain basic data mining concepts and describe the benefits of predictive analysis; understand primary data mining tasks, and describe the key steps of a data mining process; use the Oracle Data Miner to build, evaluate, and apply multiple data mining models; use Oracle Data Mining's predictions and insights to address many kinds of business problems, including: predict individual behavior, predict values, find co-occurring events; learn how to deploy data mining results for real-time access by end-users. Five reasons why you should attend this 2-day Oracle Data Mining Oracle University course: with Oracle Data Mining, a component of the Oracle Advanced Analytics Option, you will learn to gain insight and foresight to: Go beyond simple BI and dashboards about the past - this course will teach you about "data mining" and "predictive analytics", analytical techniques that can provide huge competitive advantage; take advantage of your data and investment in Oracle technology; leverage all the data in your data warehouse - customer data, service data, sales data, customer comments and other unstructured data, point of sale (POS) data - to build and deploy predictive models throughout the enterprise; learn how to explore and understand your data and find patterns and relationships that were previously hidden; focus on solving strategic challenges to the business, for example, targeting "best customers" with the right offer, identifying product bundles, detecting anomalies and potential fraud, finding natural customer segments and gaining customer insight.

    Read the article

  • mini linux to use dd and access HD and/or USB

    - by acidzombie24
    I was thinking about something. I want to create a 7 GB partition, store 2 compressed disk images on it, and install a small Linux on it. I want it to be light. What I would like to do is hide the GRUB loader (or whatever boot menu is used) and, if I want to reformat my PC, press a certain key on startup, which will then load the Linux OS so I can use dd to restore whichever partition I want. I plan to use Windows XP and Windows 7 as my main OSes and virtualize anything else I need (Vista, a dummy XP for testing, multiple Linux distros, etc.). Bonus points if you can tell me how to hide the partition in Windows so XP and 7 can't touch it.
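
    A rough sketch of the imaging side of this plan (device names and paths are examples only; double-check them, since dd overwrites without asking):

      # Create a compressed image of the Windows partition
      dd if=/dev/sda1 bs=4M | gzip -c > /images/winxp.img.gz

      # Restore it later from the minimal Linux install
      gunzip -c /images/winxp.img.gz | dd of=/dev/sda1 bs=4M

    As for hiding the 7 GB partition: XP and 7 cannot read ext3/ext4 natively, so a Linux-type (0x83) partition never receives a drive letter; at most it appears as unformatted space in Disk Management.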

    Read the article

  • SOA & Application Grid Specialization – 6 steps to success – part 1 OMM

    - by Jürgen Kress
    SOA Specialization – Oracle Open Market Model (OMM)

    Dear Application Grid SOA Partners,

    Our goal is to get you SOA Specialized; in the next weeks we will show you, in a series of posts, how you can achieve SOA Specialization. Specialization is key to being recognized by Oracle and to being preferred by our customers. The first step to becoming SOA Specialized is to prove 2 transactions. The transactions can be resell, co-sell or referral – as the proof point we use our Open Market Model (OMM). To create your account, go to our new Partner Portal:

    go to the login of your OPN homepage: http://oraclepartnernetwork.oracle.com
    click on: "Sales"
    "Create a PRM User Account"
    Enter your User ID
    Enter your Company Identifier: ((please ask your OPN IC))
    Finish
    Wait for a confirmation email

    If you need OMM support please contact our dedicated team:
    Nordics please ask: [email protected]
    Portugal, Spain please ask: [email protected]
    Austria, Belgium, Germany, Luxembourg, Netherlands, Switzerland, United Arab Emirates, United Kingdom please ask: [email protected]

    For more information about OMM watch our on-demand webcast “Recognising the Value of Partners: Register Oracle Deals through the Open Market Model (OMM)”.

    Become SOA Specialized today: create your references, create your OMM entry, take the SOA Sales assessment, take the SOA Pre-Sales assessment, take the Support assessment and register for the SOA Implementation assessment. For more information on Specialization please visit our OPN Specialized Webcast Series. To get support on Specialization please contact the Partner Business Centers.

    SOA Specialized                    | Application Grid Specialized
    Proof 2 transactions with OMM      | Proof 2 transactions with OMM
    Create your 2 references           | Create your 2 references
    SOA Sales assessment 3             | Oracle Application Grid Sales Specialist
    SOA Pre-Sales assessment 3         | Oracle Application Grid PreSales Specialist
    Support assessment 1               | Support assessment 2
    SOA Implementation assessment 4    | Application Grid Implementation assessment 4

    Read the article

  • Windows 7 won't read from NAS on LAN

    - by Alfy
    I've got a LinkStation NAS drive on a local network. Having just got a new laptop with Windows 7 Home Professional, I can no longer read anything off the drive. I've tried accessing the drive using \\192.168.1.55\share, using FTP programs such as WinSCP and FileZilla, and even using Firefox to hit ftp://192.168.1.55. The really annoying thing is that through all these methods I can see the files on the drive, which rules out any kind of connection issue. I can navigate through the NAS file system, but as soon as I try to copy a file off the NAS, things just stop working. Accessing the drive through a Windows XP machine works fine. So far I've tried: disabling firewalls; adding the LmCompatibilityLevel key to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa; and using the 40 - 56 bit encryption instead of the 128 bit. Has anyone got any suggestions of what I can check or try? This is driving me crazy and I'm totally out of ideas.
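
    For reference, a hedged sketch of setting that registry value from an elevated command prompt; the value 1 is only illustrative, since the right authentication level depends on what the NAS firmware negotiates:

        rem lower the LAN Manager authentication level, often needed for older NAS devices
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    A reboot (or at least logging off and back on) is typically needed before the change takes effect.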

    Read the article

  • “Big Data” Is A Small Concept Unless You Can Apply It To The Customer Experience

    - by Michael Hylton
    There’s been a lot of recent talk in the industry about “big data”.  Much can be said about the importance of big data and the results from it, but you need to always consider the customer experience when analyzing and applying customer data. Personalization and merchandising drive the user experience.  Big data should enable you to gain valuable insight into each of your customers and apply that insight at the moment they are on your Web site, talking to one of your call center agents, or at any other touchpoint.  While past customer experience is important, you need to combine that with what your customer is doing on your Web site now, as well as what they are doing and saying on social networking sites.  It’s key to have a 360-degree view of your customer across all of your touchpoints in order to provide the relevant and consistent experience that they come to expect when interacting with your brand. Big data can enable you to effectively market, merchandize, and recommend the right products to the right customers at the right time.  By taking customer data and applying it to product recommendations, you have an opportunity to gain a greater share of wallet through the cross-selling and up-selling of additional products and services.  You can also build sustaining loyalty programs to continue to engage with your customers throughout their long-term relationship with your brand.

    Read the article

  • How to force Chrome to make bookmarks the priority for auto-complete in the address bar?

    - by NoCatharsis
    As it is right now, if I start typing, for instance, "dictionary" into the address bar, Chrome immediately returns a list of bookmarks, history, and related sites. However, the first and highlighted option is to search Google for "dictionary". I want Chrome to immediately recognize that I have a bookmark specifically named "Dictionary" that links to the site www.dictionary.com. But that's the second choice, not the first. So I have to type a few letters, get auto-complete to suggest some sites, then key down to my bookmark item before pressing Enter. How annoying. Is there any way to cut out the middleman and make my bookmark the top result?

    Read the article

  • Demantra USA Based Companies and SOX Compliance

    - by user702295
    A USA-based company is assessing Demantra Trade Promotion Management (TPM) capability. It appears that SOX compliance is necessary in their case due to the nature of what TPM does and the need for auditability. Do we have any detail on SOX compliance for Demantra?

    Answer
    -------
    SOX compliance with regards to IT:

    1. Requires auditing of data changes: by whom, what, and when
        a. Audit trail profiles can be set up for key financial series, and they can be viewed in audit trail reports
        b. One piece of functionality we do not have, which is typically asked for, is user login history. We have only active sessions; history is not available.
    2. Segregation of duties
        a. With respect to TPM, the deduction and financial analyst for settlement can be different from the promotion creator, promotion approver or sales team.
        b. The budget approver for funds can be different from the funds consumer.
        c. The promotion creator can be different from the promotion approver.
        d. For a US customer you may have to write some custom scripts to capture promotion status changes and produce an external report as part of compliance.

    One additional requirement is transparency of forward commitments entered into with retailers / distributors for trade spending and promotions. Outside of Demantra: Consumer Goods Trade Funds Analytics.

    Read the article

  • Thinktecture.IdentityModel: WRAP and SWT Support

    - by Your DisplayName here!
    The latest drop of Thinktecture.IdentityModel contains some helpers for the Web Resource Authorization Protocol (WRAP) and Simple Web Tokens (SWT).

    WRAP
    The WrapClient class is a helper to request SWT tokens via WRAP. It supports issuer/key, SWT and SAML input credentials, e.g.:

        var client = new WrapClient(wrapEp);
        var swt = client.Issue(issuerName, issuerKey, scope);

    All Issue overloads return a SimpleWebToken type, which brings me to the next helper class.

    SWT
    The SimpleWebToken class wraps a SWT token. It combines a number of features: conversion between string format and CLR type representation, creation of SWT tokens, validation of SWT tokens, projection of a SWT token as IClaimsIdentity, and helpers to embed SWT tokens in headers and query strings. The following sample code generates a SWT token using the helper class:

        private static string CreateSwtToken()
        {
            var signingKey = "wA…";
            var audience = "http://websample";
            var issuer = "http://self";

            var token = new SimpleWebToken(
                issuer, audience, Convert.FromBase64String(signingKey));

            token.AddClaim(ClaimTypes.Name, "dominick");
            token.AddClaim(ClaimTypes.Role, "Users");
            token.AddClaim(ClaimTypes.Role, "Administrators");
            token.AddClaim("simple", "test");

            return token.ToString();
        }

    Read the article

  • httpd.conf variables : What is the difference between ${var} and %{var}?

    - by 108.im
    What is the difference between ${var} and %{var} in httpd.conf? How and when would one use ${} and %{}? http://httpd.apache.org/docs/2.4/configuring.html mentions: the values of variables defined with the Define directive, or of shell environment variables, can be used in configuration file lines using the syntax ${VAR}. http://httpd.apache.org/docs/2.4/mod/mod_rewrite.html mentions: Server-Variables are variables of the form %{NAME_OF_VARIABLE}, and RewriteMap expansions are expansions of the form ${mapname:key|default}. So is ${VAR} used everywhere in httpd.conf, except in mod_rewrite directives (like RewriteCond and RewriteRule, but excepting RewriteMap expansions, which use ${} as in RewriteRule ^/ex/(.*) ${examplemap:$1})? And would a variable set in httpd.conf using the SetEnvIf directive, for use in the same httpd.conf, be referenced as ${var}, except when the variable is used with mod_rewrite directives, where it would be referenced as %{var}?
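
    A small illustrative sketch of the three syntaxes side by side (the directives are real; the host names and map file are made up). ${VAR} from Define is expanded once, when the configuration is parsed; %{VAR} in mod_rewrite is evaluated per request; and ${mapname:key|default} is a per-request RewriteMap lookup that merely reuses the ${} spelling:

        # ${VAR}: substituted at configuration parse time
        Define docroot /var/www/example
        DocumentRoot "${docroot}"

        # %{VAR}: a mod_rewrite server variable, evaluated per request
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^old\.example\.com$
        RewriteRule ^(.*)$ http://new.example.com$1 [R=301,L]

        # ${mapname:key|default}: a RewriteMap expansion, also evaluated per request
        RewriteMap examplemap txt:/etc/httpd/examplemap.txt
        RewriteRule ^/ex/(.*) ${examplemap:$1|notfound}

    So the two notations look alike only by accident: outside mod_rewrite, ${} means a Define or shell variable; inside mod_rewrite rules, %{} reads request state and ${} is reserved for map lookups.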

    Read the article

  • Random Monday Thoughts

    - by Terry Goldman
    On this Monday morning my thoughts center on why it is so hard to embrace governance, any form of governance for that matter, be it software development governance, SOA governance, data governance, IT governance, and so on and so forth.

    Most customers that I meet tend to think that they don't need governance, as all is good within the enterprise. The point I generally make to colleagues and customers is that you have to think of governance as an insurance policy. For instance, if you just bought a new car, perhaps your "dream" car, would you drive it on the open streets without having the car insured? Probably not.

    Governance is to SOA, IT transformations and software development what insurance is to new cars. Governance is an insurance policy against the risk of failure. Once I put it in this context, ask yourself: does governance have value to your organization? Most people now get it. Once the seed of governance is planted at the executive level of an organization, it becomes an exercise in planting that idea in key personnel within the organization. Then the justification for governance grows and grows across the enterprise.

    That's my food for thought on this Monday morning.

    FYI, stay tuned for an upcoming multi-part article on using Oracle Enterprise Repository to build an Enterprise Continuum as described in TOGAF v9.0.

    Read the article

  • ssl between balancer members?

    - by jemminger
    I have Apache running on one machine as a load balancer:

        <VirtualHost *:443>
            ServerName ssl.example.com
            DocumentRoot /home/example/public

            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/example.crt
            SSLCertificateKeyFile /etc/pki/tls/private/example.key

            <Proxy balancer://myappcluster>
                BalancerMember http://app1.example.com:12345 route=app1
                BalancerMember http://app2.example.com:12345 route=app2
            </Proxy>

            ProxyPass / balancer://myappcluster/ stickysession=_myapp_session
            ProxyPassReverse / balancer://myappcluster/
        </VirtualHost>

    Note that the balancer takes requests on SSL port 443, but then communicates with the balancer members on a non-SSL port. Is it possible to have the forwarding to the balancer members be under SSL too? If so, is this the best/recommended way? If so, do I have to have another SSL cert for each balancer member? Does the SSLProxyEngine directive have anything to do with this?
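
    Yes, this is possible: re-encrypting traffic to the backends is what SSLProxyEngine is for. A minimal hedged sketch, assuming each app server terminates TLS itself on port 12346 (the port is illustrative):

        # inside the same <VirtualHost *:443>
        SSLProxyEngine on

        <Proxy balancer://myappcluster>
            BalancerMember https://app1.example.com:12346 route=app1
            BalancerMember https://app2.example.com:12346 route=app2
        </Proxy>

    Each balancer member then needs a certificate of its own (or a shared one, if the hostnames allow it), and how strictly the proxy checks those certificates is governed by directives such as SSLProxyVerify and SSLProxyCACertificateFile.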

    Read the article
