Search Results

Search found 28707 results on 1149 pages for 'writing your own'.


  • New SQL Code Deployment Book and Damn I Need to Blog More

    - by Rodney
    Select datediff(d,'02/19/2009',getdate()) The value returned from the above SELECT statement is 398, and that is the number of days since my last blog post. As I was formulating my apology for this hiatus from blogging, it dawned on me that I also do not twitter (sorry, tweet) and, as apologies beget apologies, I then realized that instead of catching up on the backlog of blogs, I should write a book about what I have been most focused on in the past year, one month and 3 days. That focus is my day job, which of course, most of us have. And that day job we share is why most of us read blogs, tweets, articles and even books in the first place. So my focus for the past year has been SQL code deployments and all that entails. I am fortunate that Redgate has agreed to entertain my crazy notion of writing an entire book about this subject, which I have tentatively titled, "The Sound and the Fury". Wait... that is not right. Oh yes, a title more befitting a technical tome but with as much profundity, "Standardizing SQL Server Code Deployments - A Redgate Guide". The great American novel must wait a few more years. As I begin this journey, I am inviting you to assist me in the discovery process and even be interviewed and included in the book itself. How do you do deployments in your company? Do you have a documented process or no process? Do you do code review or cross your fingers? Do you work for a small company or a Fortune 100 company? Government regulations or garage? It does not matter to me. I am not here to judge. I have worked for both kinds of companies myself and have seen many things that you can relate to. If you would like to participate and are one of the 3 people still reading this blog after 398 days, please fill out my survey and let's get started. http://www.surveymonkey.com/s/RRG86RH

    Read the article

  • make vnc server listen on guest's ip address

    - by gucki
    My host system has the IP 192.168.0.250. Now I want to create a KVM guest using a tap device (so the network card of the guest just acts like a "real" one). The guest has a static IP, 192.168.0.249, which it sets up on its own (no DHCP). To connect to the guest using VNC I can use the host's IP. So far everything works fine. Now I wonder how I can make the VNC server listen on the guest's IP address, so I can use the guest's IP address to connect with my VNC client. Of course I cannot use -vnc 192.168.0.249:1, as this IP is not active on the host and so it fails with Cannot assign requested address. Can this be done with tap networking at all? If not, how can I get it working?

    Read the article

  • puppet master REST API returns 403 when running under passenger works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

  • How to manage a multiplayer asynchronous environment in a game

    - by Phil
    I'm working on a game where players can set up villages, which can contain defending units. Any of these units (each on its own tile) can be set to "campaign", which means it is no longer defending but can now be used to attack other villages. And each unit on a tile can have up to 100 health. So far so good. Oh, and it's all asynchronous, so even though the server will be aware that your village is being attacked, you won't be until the attack is over. The issue I'm struggling with is the following situation. Let's say a unit on a tile is being attacked by a player from another village. The other player sees your village and is attacking your units. You don't know this is happening though, so you set your unit to campaign and off you go to attack another village, with the unit which itself is actually being attacked by this other player. The other player stops attacking your village and leaves your unit with, say, a health of 1, which is then saved to the server. You, however, are attacking another village with this same unit, and now you discover that even though it started off with 100 health, it mysteriously only has 1... Solutions? Ideas? Edit: The simplest solutions are often the best. I referred to Clash of Clans below; after a bit more digging it seems that in CoC you can only attack players that are offline! Ha, that almost solves the problem. I say almost because there's still the situation where a player's village could be in the process of being attacked when they come back online; I still need to address that. Edit 2: A solution to the "What happens when a player is attacking your village and you come online" issue could be that the attacking player just gets kicked out of the village at that point and keeps whatever they had won up to that point; it's a bit of a fudge but it might work.
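
    For illustration only, here is a minimal C# sketch of one common way to handle the conflict described above: give each unit a version number on the server and reject stale battle results (optimistic concurrency). All of the names (Unit, UnitStore, ApplyBattleResult) are invented for this sketch and are not taken from the post.

        using System;
        using System.Collections.Generic;

        class Unit
        {
            public int Id;
            public int Health = 100;
            public int Version;   // bumped on every server-side change to this unit
        }

        class UnitStore
        {
            private readonly Dictionary<int, Unit> _units = new Dictionary<int, Unit>();

            public void Add(Unit unit) { _units[unit.Id] = unit; }
            public Unit Get(int id) { return _units[id]; }

            // A client submits a battle result along with the version it started from.
            // If another battle already changed the unit, the save is rejected and the
            // client re-syncs instead of silently overwriting the newer health value.
            public bool ApplyBattleResult(int unitId, int newHealth, int expectedVersion)
            {
                Unit unit = _units[unitId];
                if (unit.Version != expectedVersion)
                    return false;                       // stale: the unit was attacked in the meantime
                unit.Health = Math.Max(0, newHealth);
                unit.Version++;
                return true;
            }
        }

    The Clash of Clans approach mentioned in the edits (only allowing attacks on offline players) avoids the conflict entirely; a version check like the one above covers the remaining case where two sessions still touch the same unit.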

    Read the article

  • Data validation best practices: how can I better construct user feedback?

    - by Cory Larson
    Data validation, whether it be domain object, form, or any other type of input validation, could theoretically be part of any development effort, no matter its size or complexity. I sometimes find myself writing informational or error messages that might seem harsh or demanding to unsuspecting users, and frankly I feel like there must be a better way to describe the validation problem to the user. I know that this topic is subjective and argumentative. StackOverflow might not be the proper channel for diving into this subject, but like I've mentioned, we all run into this at some point or another. There are so many StackExchange sites now; if there is a better one, feel free to share! Basically, I'm looking for good resources on data validation and the user feedback that results from it, at a theoretical level. Topics and questions I'm interested in are:
    Content: Should I be describing what the user did correctly or incorrectly, or simply what was expected? How much detail can the user read before they get annoyed? (e.g. Is "Username cannot exceed 20 characters." enough, or should it be described more fully, such as "The username cannot be empty, and must be at least 6 characters but cannot exceed 30 characters."?)
    Grammar: How do I decide between phrases like "must not," "may not," or "cannot"?
    Delivery: This can depend on the project, but how should the information be delivered to the user? Should it be obtrusive (e.g. JavaScript alerts) or friendly? Should it be displayed prominently? Immediately (i.e. without confirmation steps, etc.)?
    Logging: Do you bother logging validation errors?
    Internationalization: Some cultures prefer or better understand directness over subtlety and vice-versa (e.g. "Don't do that!" vs. "Please check what you've done."). How do I cater to the majority of users?
    I may edit this list as I think more about the topic, but I'm genuinely interested in proper user feedback techniques. I'm looking for things like research results, poll results, etc. I've developed and refined my own techniques over the years that users seem to be okay with, but I work in an environment where the users prefer to adapt to what you give them over speaking up about things they don't like. I'm interested in hearing your experiences in addition to any resources to which you may be able to point me.

    Read the article

  • Top 10 OTN Tech Articles for 2012

    - by Bob Rhubart
    It takes a special kind of IT pro to risk additional carpal tunnel damage to pound out a technical article after spending the day wrestling with a keyboard while dealing with other duties. That kind of dedication is noteworthy, even more so if people actually take the time to read the resulting article. So if you know any of the authors listed below, skip the handshake and give them a congratulatory slap on the back for all that time spent torturing their tendons. Their hard work has earned a place on this list of the Top 10 most popular OTN articles published in 2012.
    1. Getting Started with Java SE Embedded on the Raspberry Pi by Bill Courington and Gary Collins
    2. How Dell Migrated from SUSE Linux to Oracle Linux by Jon Senger, Aik Zu Shyong, and Suzanne Zorn
    3. Exploring Oracle SQL Developer by Przemyslaw Piotrowski
    4. Getting Started with Oracle Unbreakable Enterprise Kernel Release 2 by Lenz Grimmer
    5. How to Get Started (FAST!) with JavaFX 2 and Scene Builder by Mark Heckler
    6. How to Use Oracle VM VirtualBox Templates by Yuli Vasiliev
    7. How to Update Oracle Solaris 11 Systems From Oracle Support Repositories by Glynn Foster
    8. Tips for Hardening an Oracle Linux Server by Lenz Grimmer and James Morris
    9. How To Configure Browser-based SSO with Kerberos/SPNEGO and Oracle WebLogic Server by Abhijit Patil
    10. How to Create a Local Yum Repository for Oracle Linux by Jared Greenwald
    Of course, OTN has a great many articles covering a broad range of topics of interest to Java developers, DBAs, sysadmins, solution architects, and everybody else who works keeping the IT world running. You'll find them here. If you have suggestions for topics or technologies you'd like to see covered, please let us know. And if you have insight and expertise to share, why not write your own article? Click here to learn how to get published on OTN.

    Read the article

  • Increase application performance

    - by Prayos
    I'm writing a program for a company that will generate a daily report for them. All of the data that they use for this report is stored in a local SQLite database. For this report, they utilize pretty much every bit of the information in the database. So currently, when I query the database, I retrieve everything, and store the information in lists. Here's what I've got: using (var dataReader = _connection.Select(query)) { if (dataReader.HasRows) { while (dataReader.Read()) { _date.Add(Convert.ToDateTime(dataReader["date"])); _measured.Add(Convert.ToDouble(dataReader["measured_dist"])); _bit.Add(Convert.ToDouble(dataReader["bit_loc"])); _psi.Add(Convert.ToDouble(dataReader["pump_press"])); _time.Add(Convert.ToDateTime(dataReader["timestamp"])); _fob.Add(Convert.ToDouble(dataReader["force_on_bit"])); _torque.Add(Convert.ToDouble(dataReader["torque"])); _rpm.Add(Convert.ToDouble(dataReader["rpm"])); _pumpOneSpm.Add(Convert.ToDouble(dataReader["pump_1_strokes_pm"])); _pumpTwoSpm.Add(Convert.ToDouble(dataReader["pump_2_strokes_pm"])); _pullForce.Add(Convert.ToDouble(dataReader["pull_force"])); _gpm.Add(Convert.ToDouble(dataReader["flow"])); } } } I then utilize these lists for the calculations. Obviously, the more information that is in this database, the longer the initial query will take. I'm curious if there is a way to increase the performance of the query at all? Thanks for any and all help. EDIT: One of the report rows is called Daily Drilling Hours. For this calculation, I use this method: // Retrieves the timestamps where measured depth == bit depth and PSI >= 50 public double CalculateDailyProjectDrillingHours(DateTime date) { var dailyTimeStamps = _time.Where((t, i) => _date[i].Equals(date) && _measured[i].Equals(_bit[i]) && _psi[i] >= 50).ToList(); return _dailyDrillingHours = Convert.ToDouble(Math.Round(TimeCalculations(dailyTimeStamps).TotalHours, 2, MidpointRounding.AwayFromZero)); } // Checks that the interval is less than 10, then adds the interval to the total time private static TimeSpan TimeCalculations(IList<DateTime> timeStamps) { var interval = new TimeSpan(0, 0, 10); var totalTime = new TimeSpan(); TimeSpan timeDifference; for (var j = 0; j < timeStamps.Count - 1; j++) { if (timeStamps[j + 1].Subtract(timeStamps[j]) <= interval) { timeDifference = timeStamps[j + 1].Subtract(timeStamps[j]); totalTime = totalTime.Add(timeDifference); } } return totalTime; }
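
    For illustration, here is a minimal sketch of pushing the filtering into SQLite so that only the rows needed for a given day's calculation are loaded, rather than every column of every row. It assumes the System.Data.SQLite ADO.NET provider and an invented table name (drill_data); the post does not show the actual schema or the _connection wrapper, so treat these names as placeholders.

        using System;
        using System.Collections.Generic;
        using System.Data.SQLite;

        class ReportQueries
        {
            private readonly string _connectionString;

            public ReportQueries(string connectionString)
            {
                _connectionString = connectionString;
            }

            // Returns only the timestamps needed for the daily drilling-hours calculation.
            // The WHERE clause does the work the LINQ filter currently does in memory.
            public List<DateTime> GetDailyDrillingTimestamps(DateTime date)
            {
                var timestamps = new List<DateTime>();
                using (var conn = new SQLiteConnection(_connectionString))
                using (var cmd = new SQLiteCommand(
                    "SELECT timestamp FROM drill_data " +
                    "WHERE date = @date AND measured_dist = bit_loc AND pump_press >= 50 " +
                    "ORDER BY timestamp", conn))
                {
                    cmd.Parameters.AddWithValue("@date", date.Date);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            timestamps.Add(Convert.ToDateTime(reader["timestamp"]));
                    }
                }
                return timestamps;
            }
        }

    Selecting only the columns a calculation needs, filtering in the WHERE clause, and indexing the date column are usually the first wins here; the interval summing in TimeCalculations can stay in C# once the list is already small.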

    Read the article

  • Troubleshooting source of heavy resource-usage on a windows server 2008 running multiple sites

    - by batman_man
    Hi, I am running about 10 ASP.NET websites on a hosted virtual server. The server runs Server 2008 - each website is backed by its own database running on SQL Server 2008 on the same box. Lately the box has seemed really slow. The only kind of discovery I could think of doing was looking in Task Manager, where I can see w3wp and sqlserver.exe jumping to 40% CPU usage every 5-10 seconds. What are the steps I can take to determine which of my websites is taking these resources and/or which database is getting hit the most? I have, of course, SSMS installed on the machine as well. As you can tell, my sysadmin skills are very, very limited - any help would be much appreciated.

    Read the article

  • Announcing Monthly Silverlight Giveaways on MichaelCrump.Net

    - by mbcrump
    I've been working with different companies to give away a set of Silverlight controls to my blog readers for the past few months. I’ve decided to set up this page and a friendly URL for everyone to keep track of my monthly giveaway. The URL to bookmark is: http://giveaways.michaelcrump.net. My main goals for giving away the controls are:
    I love the Silverlight Community, and giving away nice controls always sparks interest in Silverlight.
    To spread the word about a company and let the user decide if it meets their particular situation.
    I am a “control” junkie; it helps me solve my own personal business problems since I know what is out there.
    To provide some additional hits for my blog.
    Below is a grid that I will update monthly with the current giveaway. Feel free to bookmark this post and visit it monthly.
    Month - Name - Giveaway Description - Blog Post Detailing Controls
    October 2010 - Telerik - Silverlight Controls – RadControls - http://michaelcrump.net/archive/2010/10/15/win-telerik-radcontrols-for-silverlight-799-value.aspx
    November 2010 - Mindscape - Mindscape Mega-Pack - http://michaelcrump.net/archive/2010/11/11/mindscape-silverlight-controls--free-mega-pack-contest.aspx
    December 2010 - Infragistics - Silverlight Controls + Silverlight Data Visualization - http://michaelcrump.net/mbcrump/archive/2010/12/15/win-a-set-of-infragistics-silverlight-controls-with-data-visualization.aspx
    January 2011 - Cellbi - Cellbi Silverlight Controls - http://michaelcrump.net/mbcrump/archive/2011/01/05/cellbi-silverlight-controls-giveaway-5-license-to-give-away.aspx
    February 2011 - *BOOKED*
    To Third Party Silverlight Companies: If you create any type of Silverlight control/application and would like to feature it on my blog then you may contact me at michael[at]michaelcrump[dot]net. Giving away controls has proven to be beneficial for both parties as I have around 4k Twitter followers and average around 1000 page views a day. Contact me today and give back to the Silverlight Community. Subscribe to my feed

    Read the article

  • SQL SERVER – CSVExpress and Quick Data Load

    - by pinaldave
    One of the newest ETL tools is CSVexpress.com. This is a program that can quickly load any CSV file into ODBC compliant databases using data integration. For those of you familiar with databases and how they operate, the question that comes to mind might be what use this program will have in your life. I have written an earlier article on this subject over here: SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress. You might know that RDBMS have automatic support for loading CSV files into tables – but it is not quite as easy as one click of a button. First of all, most databases have a command line interface and you need the file and configuration script in order to load up. You also need to know enough to write the script – which for novices can be extremely daunting. On top of all this, if you work with more than one type of RDBMS, you need to know the ins and outs of uploading and writing scripts for more than one program. So you might begin to see how useful CSVexpress.com might be! There are many other tools that enable uploading files to a database. They can be very fancy – some can generate configuration files automatically, others load the data directly. Again, novices will be able to tell you why these aren’t the most useful programs in the world. You see, these programs were created with SQL in mind, not for uploading data. If you don’t have large amounts of data to upload, getting the configurations right can be a long process and you will have to check the code that is generated yourself. Not exactly “easy to use” for novices. That makes CSVexpress.com one of the best new tools available for everyone – but especially people who don’t want to learn a lot of new material all at once. CSVexpress has an easy-to-navigate graphical user interface and no scripting or coding is required. There are built-in constraints and data validations, and you can configure transforms and reject records right there on the screen. But the best thing of all – it’s free! That’s right, you can download CSVexpress for free from www.csvexpress.com and start easily uploading and configuring files almost immediately. If you’re currently happy with your method of data configuration, keep up the good work. For the rest of us, there’s CSVexpress.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Unknown MySQL server host - connection problem

    - by Zukas
    I am new to databases and I have been asked to look at a few tables and see how many records they have and some other information. I cannot access phpMyAdmin through cPanel, which is how I've always done it on my own server. I decided to download MySQL Workbench. I enter all the information it asks for: Hostname: mysite.startlogicmysql.com Port: 3306 Username: user. I press connect and get this: Unknown MySQL server host 'mysite.startlogicmysql.com' (11004). Am I using the wrong hostname? I've seen a server name and a hostname in the server variables list which is something like custsql.eigbox.net, and the server itself is custsql.eigbox.net. In both cases the custsql part is a little different from what I posted. I am not sure which one to use. If there is anything else anyone needs to know, I can tell you. Thanks.

    Read the article

  • Windows 8 Upgrade : From Windows 7 Trial

    - by Golmaal
    This is a bit complicated, it seems. I own Windows Vista Ultimate 64-bit (Retail). It was okay, but around a couple of weeks ago I had a system crash, and at that time I decided that I would install Windows 8 as soon as it came out. However, because of some problems in Vista at the time of the crash, I installed a Windows 7 trial. I had some urgent work to do, which I accomplished, and then I switched the PC off. Now I have purchased the Windows 8 Pro Upgrade ($40 version). If I go for a clean install, will it be able to install Windows 8 on a non-activated Windows 7? During activation, if it asks for the Vista serial number, I can provide it. Or will I first have to install Vista, and only then will it allow me to install Windows 8? Also, I used the Upgrade Assistant to download Windows 8 on my laptop (Windows 7 OEM). Will it work on my above-mentioned desktop?

    Read the article

  • Cannot FTP without simultaneous SSH connection?

    - by Lucas
    I'm trying to set up an old box as a backup server (running 10.04.4 LTS). I intend to use 3rd party software on my PC to periodically connect to my server via FTP(S) and to mirror certain files. For some reason, all FTP connection attempts fail UNLESS I'm simultaneously connected via SSH. For example, if I use putty to test the connection to port 21, the system hangs and times out. I get: 220 Connected to LeServer USER lucas 331 Please specify the password. PASS [password] <cursor> However, when I'm simultaneously logged in (in another session) everything works: 220 Connected to LeServer USER lucas 331 Please specify the password. PASS [password] 230 Login successful. Basically, this means that my software will never be able to connect on its own, as intended. I know that the correct port is open because it works (sometimes) and nmap gives me: Starting Nmap 5.00 ( http://nmap.org ) at 2012-03-20 16:15 CDT Interesting ports on xx.xxx.xx.x: Not shown: 995 closed ports PORT STATE SERVICE 21/tcp open ftp 22/tcp open ssh 53/tcp open domain 139/tcp open netbios-ssn 445/tcp open microsoft-ds Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds My only hypothesis is that this has something to do with iptables. Maybe it's allowing only established connections? I don't think that's how I set it up, but maybe? Here's my iptables rules for INPUT: lucas@rearden:~$ sudo iptables -L INPUT Chain INPUT (policy DROP) target prot opt source destination fail2ban-ssh tcp -- anywhere anywhere multiport dports ssh ufw-before-logging-input all -- anywhere anywhere ufw-before-input all -- anywhere anywhere ufw-after-input all -- anywhere anywhere ufw-after-logging-input all -- anywhere anywhere ufw-reject-input all -- anywhere anywhere ufw-track-input all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:ftp I'm using vsftpd. Any thoughts/resources on how I could fix this? L

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is store. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.Hard Link Issues in a Component LibraryAnother reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to require a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.  So hard linking the DLL can be problematic for a number reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.Enter Dynamic LoadingSo rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.But there are also a couple of downsides:No assembly reference means only dynamic access - no compiler type checking or IntellisenseRequirement for the host application to have reference to JSON.NET or else get runtime errorsThe former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.So there are a few things that are needed to make this work:Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)Load types into dynamic variablesUse Reflection for a few tasks like statics/enumsThe dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is it doesn't handle all aspects of Reflection. 
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non instance members, so there's still a little bit of work left to do with Reflection.Dynamic Object InstantiationThe first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might have not been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable project, assemblies are just in time loaded only when they are accessed.Instantiating a type is a two step process: Finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:/// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached). 
Here's what the factory function looks like in JsonSerializationUtils:/// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; }This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config setting that might be useful to set, but the default seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.Another feature I need is have Enum values serialized as strings rather than numeric values which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.Doing the actual JSON ConversionFinally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files so I created four methods that handle these tasks two each for serialization and deserialization for string and file.Here's what the File Serialization looks like:/// <summary> /// Serializes an object instance to a JSON file. 
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer.You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.Using the JsonSerializationUtils WrapperThe final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, that is responsible for handling reading and writing JSON values to and from files. The provider is simple a small wrapper around the SerializationUtils component and there's very little code to make this work now:The whole provider looks like this:/// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } }This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases.Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.Already, there are several other places in some other tools where I use JSON serialization this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use replacing quite a bit of code that was previously in use. 
And for any other manual JSON operations (in a couple of apps I use JSON Serialization for 'blob' like document storage) this is also going to be handy.
Performance?
Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of objects serialized - the larger the object the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling' as I do here the overhead is small enough to not matter.
All that being said though - I serialized 10,000 objects in 80ms vs. 45ms so this is hardly slouchy performance. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.
Dynamic Loading - Worth it?
Dynamic loading is not something you need to worry about, but on occasion dynamic loading makes sense. But there's a price to be paid in added code and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files, adding dependencies and loading down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset…
© Rick Strahl, West Wind Technologies, 2005-2013
Posted in .NET, C#
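
One small piece referenced above but not shown in the listings is the ReflectionUtils.GetStaticProperty() helper used to read the ReferenceLoopHandling enum value by name. As a rough sketch only (the actual helper in the West Wind library may differ), it can be written with Type.InvokeMember() on top of the GetTypeFromName() method shown earlier:

    /// <summary>
    /// Sketch of a helper that reads a static property or field (including enum
    /// values) from a type referenced only by its string name, e.g.
    /// GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore").
    /// </summary>
    public static object GetStaticProperty(string typeName, string property)
    {
        Type type = GetTypeFromName(typeName);
        if (type == null)
            return null;
        try
        {
            // Static enum members are fields, so ask for both fields and properties
            return type.InvokeMember(property,
                BindingFlags.Static | BindingFlags.Public |
                BindingFlags.GetField | BindingFlags.GetProperty,
                null, type, null);
        }
        catch
        {
            return null;
        }
    }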

    Read the article

  • Am I correctly handling duplicate URLs for my homepage?

    - by Rob Goldstein
    I own a Job Search site named www.conservationjobboard.com and have a concern about how the domain is viewed by search engines. The issue is that when the site was first designed, the default page was left as default.php, but the homepage was actually JobBoard.php. To handle this, the default.php page performed a redirect to the JobBoard.php file when www.conservationjobboard.com/ was requested. The main problem resulted because the redirect was a temporary redirect, causing search engines to index conservationjobboard.com/ and conservationjobboard.com/JobBoard.php as 2 separate pages. This has since been corrected to use the .htaccess file so that JobBoard.php is now the default file for the root directory, eliminating the need for the redirect. The problem is that search engines still show both URLs in search results (one including JobBoard.php and one that ends with /). Another potential problem is that some of my early backlinks are to conservationjobboard.com/JobBoard.php while the rest are to conservationjobboard.com. The 2 outstanding questions are as follows:
    1. Is my domain still being penalized by search engines like Google for having duplicate homepage URLs?
    2. Are all of the backlinks to my homepage being considered as the same now, or is the total number of backlinks being split between the 2 different URLs?
    If you think there are still issues with how we have this set up, I was wondering if you could give me advice on what we should do differently. Thanks.

    Read the article

  • Making a cracked or activated windows uncracked

    - by ugurcode
    I have a PC which has a Windows 7 license, but I installed Windows from an image I downloaded and it is already activated. For validating as genuine Microsoft, I need to enter my own product key, but the necessary activation tools do not exist in my Windows folder. What should I do? I googled around, but because the keywords are too broad I couldn't find a useful tool. DAZ doesn't work; the activation button doesn't show up. When I enter my original key into Windows Anytime Upgrade, I get this error. When I attempt using slmgr, I get this error. I used sfc /scannow. Now slmgr exists, and I entered slmgr.vbs -ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX (replacing the Xes with the CD key); the operation was successful. Now I have installed Microsoft Security Essentials, which means the problem is solved. The main steps are:
    1. Open cmd.
    2. Enter "sfc /scannow".
    3. Enter slmgr.vbs -ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
    Success.

    Read the article

  • First time application where to start?

    - by Nazariy
    After many years of searching and copy-pasting, I'm still looking for a simple solution that can transliterate text input on the fly from one key set to another. There are quite a few online services that provide this feature, but it is still quite annoying to go online all the time. Unfortunately there are not that many applications left which are capable of doing this, and none of them is supported to this day. I decided to make my own and at the same time learn something new for myself. The idea is quite simple: the application should sit in the system tray and wait until the input language gets changed, for example to Russian. If the Russian language is activated, the application should start to listen for the user's keystroke combinations and replace them based on a custom dictionary, for example R = Р, SH = Ш, etc. I should be able to bind the application to any installed language (Russian, Ukrainian, Bulgarian, Belarusian etc.) and customise the dictionary for any of them. So my questions are: Which language should I choose for this task - C++, C#, or maybe something hardcore like Assembler - given that the application should work natively with Windows XP/Vista/7 or possibly Mac? (Cross-platform support is good, but my main target is Windows.) Due to the nature of the application's behaviour, how can I tell anti-virus software that it is not a "key logger" and basically not a virus? Where should I start and what should I be aware of? P.S. My current programming knowledge is quite basic: PHP and JavaScript with an object-oriented approach.
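
    For illustration, a minimal C# sketch of the dictionary-replacement core described above. The mappings shown are just a tiny sample and are assumptions for the example; a real tool would also need a system-wide keyboard hook (typically SetWindowsHookEx with WH_KEYBOARD_LL via P/Invoke), which is omitted here.

        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        static class Transliterator
        {
            // Sample Latin-to-Cyrillic mappings; a real dictionary would be user-configurable.
            private static readonly Dictionary<string, string> Map = new Dictionary<string, string>
            {
                { "sh", "ш" }, { "ch", "ч" }, { "ya", "я" },
                { "r", "р" }, { "s", "с" }, { "a", "а" }
            };

            // Longest keys are tried first so that "sh" wins over "s" followed by "h".
            private static readonly List<string> Keys =
                Map.Keys.OrderByDescending(k => k.Length).ToList();

            public static string Convert(string input)
            {
                var output = new StringBuilder();
                int i = 0;
                while (i < input.Length)
                {
                    string match = Keys.FirstOrDefault(k =>
                        i + k.Length <= input.Length &&
                        string.Compare(input, i, k, 0, k.Length, true) == 0);
                    if (match != null) { output.Append(Map[match]); i += match.Length; }
                    else { output.Append(input[i]); i++; }
                }
                return output.ToString();
            }
        }

        // Example: Transliterator.Convert("shar") returns "шар".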

    Read the article

  • Cocos2d: Tongue effect like in Munch Time

    - by Joey Green
    I'm wanting to do a tongue effect for my character like the one in Munch Time (shown in the pic). The player does some action and his tongue attaches to the nearest platform. I'm thinking this is simply a matter of getting the distance to the platform and keeping the player at that distance as he moves back and forth, giving him the swinging effect. For the drawing, I'm wanting the same effect where the tongue sprite is skinniest in the middle of the distance between the character and the platform. I know how to do this in a shader (I'm using cocos2d v2, by the way), but I'm wondering if there is some built-in functionality to allow me to do this. First, is this the right approach, using distance? Second, is there an easy way to do the tongue sprite effect without a shader? Third, I'm wanting to have the player spring up at will in the direction of the platform. I'm using Box2D. Would there be a way to do this using forces, or would it be easier to write my own code?
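
    For illustration, a minimal sketch of the distance-constraint idea described above, written in C# for readability (the question itself is cocos2d/Objective-C, so treat this purely as pseudocode for the math; in Box2D a distance joint or rope joint between the player body and the anchor point gives similar behaviour without hand-rolling it):

        using System;

        struct Vec2
        {
            public float X, Y;
            public Vec2(float x, float y) { X = x; Y = y; }
            public static Vec2 operator -(Vec2 a, Vec2 b) { return new Vec2(a.X - b.X, a.Y - b.Y); }
            public static Vec2 operator +(Vec2 a, Vec2 b) { return new Vec2(a.X + b.X, a.Y + b.Y); }
            public static Vec2 operator *(Vec2 a, float s) { return new Vec2(a.X * s, a.Y * s); }
            public float Length() { return (float)Math.Sqrt(X * X + Y * Y); }
        }

        static class TongueSwing
        {
            // Called once per frame: if the player has drifted past the tongue length,
            // pull him back onto the circle around the anchor, which produces the swing.
            public static Vec2 Constrain(Vec2 anchor, Vec2 player, float tongueLength)
            {
                Vec2 offset = player - anchor;
                float dist = offset.Length();
                if (dist <= tongueLength || dist == 0f)
                    return player;                              // tongue is slack, nothing to do
                return anchor + offset * (tongueLength / dist); // clamp to the tongue length
            }
        }

    For the tapering sprite, one non-shader option is to draw the tongue as two textured quads that narrow toward the midpoint, but as the post suspects, a shader is usually the cleaner route.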

    Read the article

  • SCCM 2012 R2 - OSD Task Sequence failure on physical computers

    - by user1422136
    I'm trying to deploy windows 7 with SCCM 2012 R2 to physical desktops and laptops. But the task sequence keeps failing, no matter what I try. When I try it on a VM it works fine. However, when I try it on a physical computer it fails. So I think it has something to do with drivers, but I already tried both the "auto apply drivers" + wmi query for model method, and also the "apply driver package" + wmi query for model method. In the link below I added a zip file, containing two other zip files. One is a captured log from a failed osd on a desktop, the other is the export of my task sequence. Download zip-file with log and TS If anyone could resolve the issue, or share their own task sequence for such a task (pure sccm 2012 (R2), no MDT), that would be great.

    Read the article

  • Thoughts on Nexus in SQL Server PDW

    - by jamiet
    I have been on a SQL Server Parallel Data Warehouse (aka PDW) training course this week and was interested to learn that you can't (yet) use SQL Server Management Studio (SSMS) against PDW; instead they ship a 3rd party tool called Nexus Chameleon. This was a bit of a disappointment at the beginning of the week (I'd prefer parity across SQL Server editions) but actually, having used Nexus for 3 days, I'm rather getting used to it. Some of it is a bit clunky (e.g. everything goes via an ODBC DSN) but once you get into using it, it's the epitome of "it just works". For example, over the past few years I have come to rely on intellisense in SSMS and have learnt to cope with its nuances. There is no intellisense in Nexus but you know what... I don't really miss it that much. In a sense it's a breath of fresh air not having to hope that you've crossed the line into that will it work/won't it work grey area with SSMS intellisense. And I don't end up writing @@CONNECTIONS instead of FROM anymore (anyone else suffer from this?) :) Moreover, Nexus is a standalone tool. It's not a bunch of features shoehorned into something else (Visual Studio). Another thing I like about Nexus is that you can actually do something with your resultset client-side. Take a look at the screenshots below. You can see Nexus allows you to group a resultset by a column or set of columns. Nice touch. I know that many people have submitted Connect requests asking for the ability to do similar things in SSMS, which would mean we don't have to copy resultsets into Excel (I know I have) - Nexus is a step in that direction. It's refreshing to use a tool that just gets out of the way yet still has some really useful features. How ironic that it gets shipped inside an edition of SQL Server! If I had the option of using Nexus in my day job I suspect that over time I would probably gravitate back to SSMS because as yet I haven't really stretched Nexus' capabilities; overall SSMS *does* have more features and up until now I've never really had any objections to it ... but it's been an interesting awakening into the nuances that plague SSMS. Anyone else used Nexus? Any thoughts on it? @Jamiet

    Read the article

  • Atheros wireless not working

    - by Chandru1
    I have been struggling hard since i have installed Ubuntu 10.10 but it has been difficult for me to get my wifi working. So here is what i tried. First i checked whether i have the driver using the ifconfig command and it shows the wireless lan driver as wlan0. Next, i tried the command iwlist wlan0 scanning by becoming the root which gave me the output as no scan results. Next, i visited this link https://help.ubuntu.com/community/WifiDocs/Driver/Atheros to see as to what problem my laptop may have. I do own have an ath5k chipset. And as i followed the instructions in the above link in one of the blacklist-ath_pci.conf file had this written in it. For some Atheros 5K RF MACs, the madwifi driver loads buts fails to correctly initialize the hardware, leaving it in a state from which ath5k cannot recover. To prevent this condition, stop madwifi from loading by default. Use Jockey to select one driver or the other. (Ubuntu: #315056, #323830 I am not that good at Linux but i have given it a try. I am desperate to have my wifi working and i would be glad if this community could help. ADDED: If anyone would like to know as to what drivers i am using this is the output. network description: Wireless interface product: AR2413 802.11bg NIC vendor: Atheros Communications Inc. physical id: 3 bus info: pci@0000:0a:03.0 logical name: wlan0 version: 01 serial: 00:19:7d:d3:0c:fd width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath5k driverversion=2.6.35-24-generic firmware=N/A latency=168 link=no maxlatency=28 mingnt=10 multicast=yes wireless=IEEE 802.11bg resources: irq:18 memory:d0000000-d000ffff Some more information and output as to what i have done. lsmod | grep ath ath5k 130083 0 mac80211 231541 1 ath5k ath 8153 1 ath5k cfg80211 144470 3 ath5k,mac80211,ath led_class 2633 1 ath5k

    Read the article

  • Security risk of JIRA standalone installation running JRE version 1.6.0_26 vs 1.6.0_29 (latest)

    - by kayaker243
    Atlassian recently introduced a standalone installer that installs JIRA, along with its own JRE. Unfortunately the JRE Atlassian bundles with this installer is 1.6.0_26, whereas the current version of the JRE is 1.6.0_29. This is potentially concerning given there were vulnerabilities in _26 that were fixed in the subsequent versions. We are currently using the bundled-installer version of JIRA and one contractor has recommended we ditch this for the system-installed JRE. My question is this: what is the actual security risk of continuing to use the _26 version of the JRE included in the bundled installer? There is no public access to our install of JIRA (only about 20 employees and contractors can login to our JIRA) and it's only accessible on a subdomain of a domain at which there's no publicly-available website. If there's a not insignificant risk inherent in sticking with the older JRE, why hasn't Atlassian upgraded the default JRE?

    Read the article

  • Can I easily use a VPN to duplicate SSH Tunneling functionality?

    - by Steve V.
    Right now, when I want to use an unsecured wireless connection with my (Linux) laptop, I secure my connection using a variation of the method provided here. However, to the best of my knowledge, the (non-jailbroken) iPad does not allow applications to tunnel traffic through local ports. However, it does seem to allow certain VPN traffic. I have never set up, or even used, a VPN before. I'm looking for confirmation that I'm not barking up the wrong tree before I invest significant effort into setting up my own VPN server. If I want to secure my wireless iPad traffic over an unsecure wireless connection, would I be on the right track by looking at a VPN?

    Read the article

  • Excel scatter chart with multiple date ranges

    - by Abiel
    I have multiple blocks of time series data on an Excel sheet, with each block having its own set of dates. For example, I might have dates in column A, values in column B, and then dates in column D and values in column E. The values in B go with the dates in A, and the values in E go with the dates in D. The dates in A and D may not be the same. I would like to create a scatter chart with a time category axis that is the union of my two input date ranges in columns A and D. If I select all the data and then go insert chart (in Excel 2010), Excel treats only column A as the X axis, and looks at D as just another set of values. I can get Excel to do what I want by first just charting columns A and B, then selecting D and E and copy-pasting onto the chart. However, I would like to avoid this two-step procedure if possible.

    Read the article
