Search Results

Search found 19533 results on 782 pages for 'machine specifications'.

  • SQL SERVER – Fix: Error: Compatibility Level Drop Down is Empty

    - by Pinal Dave
    I currently have SQL Server 2012 and SQL Server 2014 installed on the same machine. My job requires me to travel a lot and I like to travel light, so I have only one computer with all the software installed on it. I could use virtual machines, but since I was able to install SQL Server 2012 and SQL Server 2014 side by side, I just went ahead with that option. One day when I opened up SQL Server 2014 and went to the properties of my database, I realized that the dropdown box for Compatibility level was empty. I just couldn't select anything there or see the current compatibility level of the database. This was a first for me, so I was a bit confused and searched online. Upon searching I realized that even if I was not the first, there were very few questions on this subject on the various forums, and no convincing answer to the problem. That means I was pretty much the first one to face this error. See the image of the situation I was facing. I decided to resolve this issue as soon as I could. I spent a few minutes here and there and realized my mistake: I had connected to the SQL Server 2014 instance from SQL Server 2012 Management Studio. Hence, I was not able to see any compatibility-related settings. Once I connected to the SQL Server 2014 instance with SQL Server 2014 Management Studio, the issue was resolved. Well, simple things sometimes keep us very busy. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
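
    If you hit the same situation and want to check or change the compatibility level without relying on the SSMS dropdown, plain T-SQL works from any connection; a minimal sketch (the database name YourDatabase is a placeholder):

        -- Check the current compatibility level (YourDatabase is a placeholder)
        SELECT name, compatibility_level
        FROM sys.databases
        WHERE name = 'YourDatabase';

        -- Change it; 110 = SQL Server 2012, 120 = SQL Server 2014
        ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 120;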

    Read the article

  • Build a RAID-5 USB Array or NAS?

    - by TK Kocheran
    I'm looking to build a RAID-5 or RAID-6 hard disk array to be used as a centralized backup location for my home network. I have a lot of 4-8GB DVD ISO files which I intend to serve over the network to my TV using DLNA or something similar. I need decent write times, fast read times, and preferably a way to connect this array to my network router over USB. I'll probably be using ext3/4 on it (I'd use btrfs if my router supported it :-[). Is there a way to build a RAID array that is accessible over USB? I'm kind of new to this and most guides assume you're running a RAID array inside your machine. I need at least 5TB of space, and I need redundancy, so I prefer RAID-6. How can I go about accomplishing this?
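
    For what it's worth, on a Linux box (which could then share the array over the network via NFS, Samba, or a DLNA server) a software RAID-6 across five disks is a one-liner with mdadm; a sketch, assuming the disks appear as /dev/sdb through /dev/sdf:

        # Create a RAID-6 array from five disks (device names are assumptions)
        mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
        # Format and mount it (mount point is an example)
        mkfs.ext4 /dev/md0
        mount /dev/md0 /srv/backup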

    Read the article

  • Host name resolution on a home network

    - by Kris
    Hi, I have several machines (both virtual and physical) in my internal network at home. Currently I have to connect via IP addresses. The main machine I use to connect to all the other machines is running Windows Vista. Is there a way I can have some sort of DNS capability inside my network so I can refer to these machines by name? I think this is a common problem in most households running a few computers, and there might be some simple solutions out there. This seems like something most routers should support out of the box - but why don't they? Can anyone recommend some of these, or an easy way to accomplish this?
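
    Short of running a real DNS server, one low-tech approach is a static hosts file on each machine; a sketch of the entries (the names and addresses are examples):

        # C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux
        192.168.1.10   mediaserver
        192.168.1.11   devbox
        192.168.1.12   vm-ubuntu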

    Read the article

  • Windows 2008R2 blocks outbound LDAP for non-admins?

    - by Jon Bailey
    I've got a Windows 2008 R2 terminal server with ~30 users on it. It's joined to a Samba-based domain. During the login script, we connect directly to the LDAP server to pull out certain profile information. This used to work just fine. Now it doesn't, but only for non-local-admin accounts; local admins work fine. As a non-local-admin: connections to ports 389 or 636 just terminate (Wireshark on the LDAP server reveals no connection attempt); connections to other ports on the same server work fine; the same thing happens with multiple LDAP servers; Windows Firewall is disabled; and I can't find any other rules/policies that may block this. I suspect that since this used to work, the breakage came in with an update, but for the life of me, I can't find which. EDIT: I just ran Wireshark on the machine itself and didn't see anything when connecting to the LDAP server in question (or any LDAP server, for that matter). I can, however, see traffic when I connect to that server on another port.
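
    One way to confirm where the block happens is to probe the ports under each account with Microsoft's PortQry tool; a sketch (the hostname is a placeholder):

        rem Run once as a local admin and once as a regular user, then compare
        portqry -n ldap.example.local -p tcp -e 389
        portqry -n ldap.example.local -p tcp -e 636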

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of its long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for the transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
    Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-default.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run.
    But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but why don't we see any activity on the JobTracker? All we got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
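
    As a footnote to the coordinator-job mention above: scheduling this same workflow to run daily only needs one small additional XML file. A minimal sketch (the dates are placeholders; the app-path reuses the job.properties variables from the article):

        <coordinator-app name="WeatherCoord" frequency="${coord:days(1)}"
                         start="2012-06-01T00:00Z" end="2013-06-01T00:00Z" timezone="UTC"
                         xmlns="uri:oozie:coordinator:0.1">
          <action>
            <workflow>
              <!-- points at the HDFS directory holding workflow.xml -->
              <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
            </workflow>
          </action>
        </coordinator-app>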

    Read the article

  • "Breadcrumbs" for series of hostnames?

    - by Hamy
    Does anyone know of a shell that would show a series of breadcrumbs as I navigate into/out of various servers, like this: Home > Build Machine > Vagrant > Docker-base. Hopefully it could auto-detect logging in and out of various boxes and display the hostnames. Assuming no circular links, one could just monitor the hostname, but I don't know of a shell that can easily act as a 'parent' to the other shells on these various systems so that it can query the hostname and/or other items. Any thoughts?
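
    Nothing standard does this, but a rough approximation is to carry a breadcrumb variable across ssh hops; a sketch, assuming each server's sshd is configured to accept the variable via AcceptEnv:

        # In ~/.bashrc on every box: append this host to the trail and show it in the prompt
        export BREADCRUMB="${BREADCRUMB:+$BREADCRUMB > }$(hostname -s)"
        PS1="[$BREADCRUMB] \u@\h:\w\$ "

        # Wrapper so ssh forwards the trail (server needs 'AcceptEnv BREADCRUMB' in sshd_config)
        ssh() { command ssh -o SendEnv=BREADCRUMB "$@"; }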

    Read the article

  • SSH broken after hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives reboot. My main motivation was for differentiating between instances at the prompt using the \h format in PS1. EDIT I also changed permissions on my home directory. I made my home directory group writeable. END EDIT Now I can no longer SSH into the machine. The short of it is the error Permission denied (publickey). Running ssh -v, the more verbose output is: debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. Permission denied (publickey). Should I have done something after changing the hostname? Now I can't get into the instance! :(
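
    A likely culprit is the home-directory permission change rather than the hostname: by default, sshd's StrictModes setting refuses key authentication when the home directory is group-writable. A sketch of the fix, run from a console session or after attaching the volume to another instance (the username is a placeholder):

        # sshd rejects keys if ~ or ~/.ssh are too permissive
        chmod 755 /home/ubuntu
        chmod 700 /home/ubuntu/.ssh
        chmod 600 /home/ubuntu/.ssh/authorized_keys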

    Read the article

  • Serve web application error messages from Http server [closed]

    - by licorna
    I have nginx as an HTTP server with Tomcat as a backend (using proxy_pass). It works great, but I want to define my own error pages (404, 500, etc.) and have them served by nginx and not Tomcat. For example, I have the resource https://domain.com/resource, which doesn't exist. If I [GET] that URL, I get a Not Found message from Tomcat and not from nginx. What I want is that every time Tomcat responds with a 404 (or any other error status), nginx itself sends a message to the user: some HTML file accessible by nginx. The way I have my nginx server configured is very simple, just: location / { proxy_pass http://localhost:8080/<webapp-name>/; } And I've configured port 8080, which is Tomcat, as not accessible from outside this machine. I don't think that using different location directives in the nginx configuration will work, because there are some resources that depend on the URL: https://domain.com/customer/<non-existent-customer-name>/ [GET] will always return 404 (or some other error status), while https://domain.com/customer/<existent-customer>/ [GET] will return anything other than 404 (the customer exists). Is there any way of serving Tomcat (application server) error messages with nginx (HTTP server)? To check the status sent back through the proxy_pass directive and act upon it?
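
    nginx can do exactly this with proxy_intercept_errors, which makes it handle upstream responses with an error status through its own error_page directives; a sketch (the file paths are assumptions):

        location / {
            proxy_pass http://localhost:8080/<webapp-name>/;
            # Let nginx serve its own pages for Tomcat's error statuses
            proxy_intercept_errors on;
            error_page 404 /errors/404.html;
            error_page 500 502 503 504 /errors/50x.html;
        }

        location /errors/ {
            # Static error pages readable by nginx; not reachable directly from outside
            root /usr/share/nginx/html;
            internal;
        }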

    Read the article

  • Windows Live Messenger giving me 8100030d on sign-in

    - by Rob Farley
    I'll bullet point things of note: Other accounts work on the same machine. This is a problem across a few machines. I can sign in no problem on some machines. I can use Web Messenger on the problem machines. I'm using the latest version of Messenger. I've uninstalled, rebooted, reinstalled, rebooted. Deleting the "Windows Live Contacts" folder doesn't help. Deleting the Contacts registry entries doesn't work. I'm not using AVG (and besides, can sign in with other accounts) It successfully connects (and signs me out of eBuddy) before erroring. It creates files in the Windows Live Contacts folder. Windows XP running on the problem machines. I'm not sure what other things it could be... This has also been asked at http://www.experts-exchange.com/Software/Internet_Email/Chat_-_IM/Q_25449997.html, so let's see which system gets me the better response...

    Read the article

  • HP ProBook 4720s UEFI boot only manually into 12.04

    - by HainjeDAF
    I'm trying to get UEFI boot on my ProBook 4720s. Because I swapped the HDD for an SSD, I had a blank canvas to start. The 12.04 Live DVD refuses to boot into UEFI, as do the Alternate and Desktop CDs. However, when I make a 16 GB flash drive into a live FS using the boot disk tool in Ubuntu, I can boot from USB, manually, into UEFI mode. It even switches to the DVD as the medium when I boot from USB with the 12.04 Live DVD present. I installed a GPT partition table with part 1, label EFI, fs FAT32, flag BOOT, mounts at /boot/efi part 2, label Linux-ROOT, fs ext4, no flags, mounts at / part 3, label Linux-SWAP, fs swap, no flags, mounts as swap So far, my system refuses to boot from the hard drive by itself. I have to select "Boot from EFI file" and manually browse to (HD0,GPT1)\EFI\ubuntu\grubx64.efi; any other option ends in "no system disk, please insert boot disk". I tried installing BURG, but that merely enforces non-EFI boot. I tried most of the solutions I could find, but one says \EFI\grub\grub.cfg and the next says \EFI\ubuntu\ubuntu.cfg. I'm confused and getting frustrated. How do I correctly install Ubuntu 12.04 in UEFI mode on this machine?
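
    If the firmware simply lacks a boot entry pointing at grubx64.efi, you may be able to add one yourself from the running system with efibootmgr; a sketch, assuming the EFI system partition is partition 1 on /dev/sda:

        # Create a firmware boot entry for GRUB on the EFI system partition
        sudo efibootmgr -c -d /dev/sda -p 1 -L "ubuntu" -l '\EFI\ubuntu\grubx64.efi'
        # Verify the entry and the boot order
        sudo efibootmgr -v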

    Read the article

  • I just got a linode VPS a week ago and I've been flagged for SSH scanning

    - by meder
    I got a 32-bit Debian VPS from http://linode.com and I really haven't done any sort of advanced configuration for securing it (port 22 open; password authentication enabled). It seems there is SSH scanning going on from my IP, and I'm being flagged, as this is against the TOS. I've been SSHing in only from my home Comcast ISP connection, on which I run Linux. Is this a common thing when getting a new VPS? Are there any standard security configuration tips? I'm quite confused as to how my machine came to be accused of this SSH scanning.
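
    Regardless of how the scanning happened (a weak password being the usual suspect), the standard first steps are key-only logins and no root access; a sketch of the /etc/ssh/sshd_config changes:

        # /etc/ssh/sshd_config -- restart sshd afterwards (/etc/init.d/ssh restart)
        PermitRootLogin no
        PasswordAuthentication no   # requires your public key in ~/.ssh/authorized_keys first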

    Read the article

  • DHCP Client Can't Find DHCP Server

    - by leeman24
    I currently have 3 machines: CentOS (router) eth1 - 18.0.168.1 eth2 - 145.165.34.1 Windows Server 2008 (server) 18.0.168.2 DHCP scope - 145.165.34.10 - 145.165.34.20 Windows 7 (client) supposed to use DHCP. I can't get my Windows 7 client to get an address from the Windows Server 2008 DHCP server. Every network interface can ping every other (e.g. 18.0.168.2 can ping 18.0.168.1 & 145.165.34.1, and the other way around). My Linux machine acting as the router has the default iptables rules, other than this command, which may or may not be right: iptables -I INPUT -p udp -d 18.0.168.2 --dport 67:68 -j ACCEPT I have also tried it after flushing the iptables rules. I was looking at the dhcrelay command, but it seems CentOS doesn't have it and I am not even sure how to use it.
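
    On CentOS, dhcrelay ships inside the dhcp package rather than as its own package; a sketch of relaying client broadcasts from eth2 to the Windows DHCP server (addresses taken from the question):

        # Install and run the ISC DHCP relay agent on the CentOS router
        yum install dhcp
        dhcrelay -i eth2 18.0.168.2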

    Read the article

  • What's the best way to know if your web server goes down?

    - by Mike Christensen
    I noticed my website only got 8 visitors today, which means it probably went down very early this morning and I never noticed. Why it went down is another story. Ideally, I'd like to be emailed if my web server becomes unresponsive or does not return an HTTP 200. Is there a cloud-based service (either free or pretty cheap) that can monitor my website? If not, is there a good free/open source program I can run on either a Linux or Windows machine that will monitor a website and email me if it goes down? Thanks!
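
    For a zero-dependency option, a cron job on any machine other than the web server can do the check; a sketch, assuming a working local mail setup (the URL and address are placeholders):

        #!/bin/sh
        # Cron this every few minutes from a machine other than the web server
        code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 30 http://example.com/)
        if [ "$code" != "200" ]; then
            echo "Site check failed with HTTP $code" | mail -s "example.com is down" you@example.com
        fi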

    Read the article

  • Can't copy files with 'additional permissions' to ext4 drive -- files that have @ after permissions,

    - by 99miles
    I am copying files from Snow Leopard to a mounted ext4 share via Samba that's on a Fedora machine. Some files cannot be copied, and give this error: The operation can’t be completed because you don’t have permission to access some of the items. I've noticed that the files that can't be copied have an @ at the end of their permissions when I do 'ls -l' on the command line. For example, I can copy the second file but not the first: -rwxrwxrwx@ 1 miles staff 1448 May 14 22:55 test.txt -rw-r--r-- 1 miles staff 136 Apr 5 17:06 image.psd.zip From what I've found, the @ means the file has 'additional properties'. Does anyone know how I can resolve this issue so I can copy the files to the file share? Thanks!
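
    The @ marks extended attributes, which you can inspect and strip on the Mac side before copying; a sketch using OS X's xattr tool:

        # List the extended attributes that cause the @
        xattr -l test.txt
        # Strip them all (com.apple.quarantine and friends are usually safe to drop)
        xattr -c test.txt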

    Read the article

  • Filezilla/Puttygen doesn't recognize private key file

    - by devzoner
    I have generated a key for an Ubuntu Virtual Machine running on Azure Cloud Services http://www.windowsazure.com/en-us/manage/linux/how-to-guides/ssh-into-linux/ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem When loading the private key into Filezilla, it asks me to convert the format, however, when converting the key it fails, the same happens with puttygen from linux console, using this: puttygen myPrivateKey.key -o myKey.ppk In both cases I have the following error: puttygen: error loading `myPrivateKey.key': unrecognised key type By the way, this key doesn't have a passphrase. I found an old thread about it, but I'm using 0.6.3 version which is newer than what this thread recommends: http://fixunix.com/ssh/541874-puttygen-unable-import-openssh-key.html I've managed to solve this issue by using another gui client Fugu for Mac, but one of my co-worker uses windows and I still have to figure this out. Since Filezilla is the de-facto ftp client, I thought it would be easier to solve it there. Thanks
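
    The key produced by that openssl req command is a PKCS#8-style PEM, which PuTTYgen of that vintage doesn't read; re-encoding it as a traditional RSA key first usually does the trick:

        # Re-encode as a traditional (PKCS#1) RSA private key
        openssl rsa -in myPrivateKey.key -out myPrivateKey_rsa.key
        # Now PuTTYgen (or FileZilla's import) should accept it
        puttygen myPrivateKey_rsa.key -o myKey.ppk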

    Read the article

  • Creating Java Neural Networks

    - by Tori Wieldt
    A new article on OTN/Java, titled “Neural Networks on the NetBeans Platform,” by Zoran Sevarac, reports on Neuroph Studio, an open source Java neural network development environment built on top of the NetBeans Platform. This article shows how to create Java neural networks for classification.From the article:“Neural networks are artificial intelligence (machine learning technology) suitable for ill-defined problems, such as recognition, prediction, classification, and control. This article shows how to create some Java neural networks for classification. Note that Neuroph Studio also has support for image recognition, text character recognition, and handwritten letter recognition...”“Neuroph Studio is a Java neural network development environment built on top of the NetBeans Platform and Neuroph Framework. It is an IDE-like environment customized for neural network development. Neuroph Studio is a GUI that sits on top of Neuroph Framework. Neuroph Framework is a full-featured Java framework that provides classes for building neural networks…”The author, Zoran Sevarac, is a teaching assistant at Belgrade University, Department for Software Engineering, and a researcher at the Laboratory for Artificial Intelligence at Belgrade University. He is also a member of GOAI Research Network. Through his research, he has been working on the development of a Java neural network framework, which was released as the open source project Neuroph.Brainy stuff. Read the article here.

    Read the article

  • Wireless Access Point - Can't ping other machines on the wireless network

    - by Surfer513
    I have a wireless access point (Netgear), and I have it set up so that it has an IP address in the current subnet (let's say 192.168.2.0, subnet mask of 255.255.255.0). The machine that it is connected to via ethernet cable has an IP in the same subnet as the AP. The machines that are connected to the AP via the wireless connection also have an IP address in the same subnet as the rest of the network (192.168.2.0). All machines can ping the access point, but they cannot ping each other. I don't totally understand why, because there is connectivity and all of the machines are on the same subnet. I realize this is a layer 2 device, but is there an issue because of the AP's lack of gateway capabilities? (i.e., no routing table, etc.)

    Read the article

  • Resizing an iscsi LUN on an EMC under Linux RHEL 5

    - by niXar
    I expanded a LUN on my SAN (from 30G to 50G) and tried all kinds of voodoo, but it looks like the system still believes the device is small, despite showing me the new size: # blockdev --getsize64 /dev/sdl 53687091200 # dd if=/dev/sdl skip=$(( 31*1024*1024*1024/512 )) count=1|hexdump -C dd: reading `/dev/sdl': Input/output error The dd line works for blocks below the 30G original size. If I reboot, the problem goes away, but I don't want to since the machine has a dozen VMs running. I tried various magic, such as: iscsiadm -m node -R -I iface0 ... to no avail. Note: I'm not using multipath.
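
    On RHEL 5 you can usually get the kernel to re-read the capacity of a single SCSI/iSCSI device without a reboot by poking sysfs; a sketch:

        # Ask the kernel to re-read the size of sdl, then confirm
        echo 1 > /sys/block/sdl/device/rescan
        blockdev --getsize64 /dev/sdl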

    Read the article

  • Setting MacBook timezone to UTC

    - by Andy A
    To run my web app, I need to set my timezone to UTC on my MacBook. I can do this temporarily by opening a terminal and entering: sudo ln -sf /usr/share/zoneinfo/UTC /etc/localtime However, my timezone returns to normal when I restart my machine! Any advice? Edit: The response to this question by 'Celada' implies that I can just make my server UTC. I am using Apache Tomcat 7. Adding to Celada's response, how can I make it UTC?
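
    If the goal is just that the web app sees UTC, you can pin the JVM's default timezone for Tomcat instead of changing the whole machine; a sketch (the file location varies by install):

        # In $CATALINA_HOME/bin/setenv.sh (create it if missing)
        export CATALINA_OPTS="$CATALINA_OPTS -Duser.timezone=UTC"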

    Read the article

  • Ethernet interface number changed, and old one does not exist, but does not leave IP address

    - by Sagar
    I have a virtual machine with Mandriva 2007.0 (yes, old - unfortunately we do not have a choice here). Anyway, the problem: Before reboot: active network interface = eth0. No other interfaces present, and network manager confirms this. Static IP address set to 172.31.2.22. No issues, everything working properly, routing et al. -------Reboot--------- After reboot: active network interface = eth1, with a DHCP address. Network manager shows eth0 as disconnected, and not connectable. When I try to set eth1 up with the static IP address (same one), it says "In Use". I then tried ifconfig eth0 172.31.2.29 just to free it up from the eth0 interface so I could use it with eth1 (since this is connected). Result: ifconfig eth0 172.31.2.29 SIOCSIFADDR: No such device eth0: unknown interface: No such device Nothing else changed. Any ideas what could be happening, or at least how I can get my IP address back?
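
    On distributions of that era, the eth0/eth1 shuffle usually comes from udev's persistent-net rules (which pin interface names to MAC addresses) or from a stale HWADDR binding; a sketch of where to look, though the exact file paths may differ on Mandriva 2007:

        # If present, this file pins MAC addresses to interface names; fix or delete stale lines
        cat /etc/udev/rules.d/*persistent-net*.rules
        # Also check whether the ifcfg file is tied to the old MAC
        grep HWADDR /etc/sysconfig/network-scripts/ifcfg-eth0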

    Read the article

  • CodePlex Daily Summary for Sunday, November 11, 2012

    CodePlex Daily Summary for Sunday, November 11, 2012Popular ReleasesZXMAK2: Version 2.7.2.0: show extended rzx error info fix reset lag for PROFI ULA 5.xx fix reset behavior fix PROFI ULA timings (thanks to solegstar) fix #FF port for PROFI ULA add ATM710 memory module add new predefined machine configs: ATM Turbo 2, PROFI 3.XX???????: Monitor 2012-11-11: This is the first releaseVidCoder: 1.4.5 Beta: Removed the old Advanced user interface and moved x264 preset/profile/tune there instead. The functionality is still available through editing the options string. Added ability to specify the H.264 level. Added ability to choose VidCoder's interface language. If you are interested in translating, we can get VidCoder in your language! Updated WPF text rendering to use the better Display mode. Updated HandBrake core to SVN 5045. Removed logic that forced the .m4v extension in certain ...ImageGlass: Version 1.5: http://i1214.photobucket.com/albums/cc483/phapsuxeko/ImageGlass/1.png v1.5.4401.3015 Thumbnail bar: Increase loading speed Thumbnail image with ratio Support personal customization: mouse up, mouse down, mouse hover, selected item... Scroll to show all items Image viewer Zoom by scroll, or selected rectangle Speed up loading Zoom to cursor point New background design and customization and others... v1.5.4430.483 Thumbnail bar: Auto move scroll bar to selected image Show / Hi...Building Windows 8 Apps with C# and XAML: Full Source Chapters 1 - 10 for Windows 8 Fix 002: This is the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes: A fix for the Netflix example from Chapter 6 that was missing a service reference A fix for the ImageHelper issue (images were not being saved) - this was due to the buffer being inadequate and required streaming the writeable bitmap to a buffer first before encoding and savingmyCollections: Version 2.3.2.0: New in this version : Added TheGamesDB.net API for Games and NDS Added Support for Windows Media Center Added Support for myMovies Added Support for XBMC Added Support for Dune HD Added Support for Mede8er Added Support for WD HDTV Added Fast search options Added order by Artist/Album for music You can now create covers and background for games You can now update your ID3 tag with the info of myCollections Fixed several provider Performance improvement New Splash ...Draw: Draw 1.0: Drawing PadPlayer Framework by Microsoft: Player Framework for Windows 8 (v1.0): IMPORTANT: List of breaking changes from preview 7 Ability to move control panel or individual elements outside media player. more info... New Entertainment app theme for out of the box support for Windows 8 Entertainment app guidelines. more info... VSIX reference names shortened. Allows seeing plugin name from "Add Reference" dialog without resizing. FreeWheel SmartXML now supports new "Standard" event callback type. Other minor misc fixes and improvements ADDITIONAL DOWNLOADSSmo...WebSearch.Net: WebSearch.Net 3.1: WebSearch.Net is an open-source research platform that provides uniform data source access, data modeling, feature calculation, data mining, etc. It facilitates the experiments of web search researchers due to its high flexibility and extensibility. The platform can be used or extended by any language compatible for .Net 2 framework, from C# (recommended), VB.Net to C++ and Java. 
Thanks to the large coverage of knowledge in web search research, it is necessary to model the techniques and main...Umbraco CMS: Umbraco 4.10.0: NuGet NuGet Blog Read the release blog post for 4.10.0. What's new: MVC support New request pipeline Many, many bugfixes (see the issue tracker for a complete list) Read the documentation for the MVC bits. Breaking changes: We have done all we can not to break backwards compatibility, but we had to do some minor breaking changes: Removed graphicHeadlineFormat config setting from umbracoSettings.config (an old relic from the 3.x days) U4-690 DynamicNode ChildrenAsList was fixed, altering it'...SQL Server Partitioned Table Framework: Partitioned Table Framework Release 1.0: SQL Server 2012 Release SharePoint Manager 2013: SharePoint Manager 2013 Release ver 1.0.12.1106: SharePoint Manager 2013 Release (ver: 1.0.12.1106) is now ready for SharePoint 2013. The new version has an expanded view of the SharePoint object model and has been tested on SharePoint 2013 RTM. As a bonus, the new version is also available for SharePoint 2010 as a separate download. D3D9Client: D3D9Client R7: New release for Orbiter 2010-P1 - Added horizon/sun angle for night-lights into the configuration file (default 10deg) - Some runway lights related bugs are fixed - Added more configuration options for runway lights Fiskalizacija za developere: FiskalizacijaDev 1.2: Version 1.2 is, above all, a response to the new version of the Technical Specification (v1.1) that was published a few days ago. Besides the news related to the (minor) changes in the aforementioned new version of the Technical Documentation, we have extended the project with some additional features, most of which came from your suggestions - thanks :) New in v1.2: - Requirement mismatch (http://fiskalizacija.codeplex.com/workitem/645) - Sample project - amounts are multiplied by 100 (http://fiskalizacija.codeplex.c...MFCMAPI: October 2012 Release: Build: 15.0.0.1036 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook Badge DictationTool: DictationCool-WPF: • Open a media file to start a new dictation. • Open a dct file to continue a dictation. • Compare your dictation with original text if exists. • Save your dictation to dct file, and restore it to continue later. • Save the compared result to html file. MCEBuddy 2.x: MCEBuddy 2.3.7: Changelog for 2.3.7 (32bit and 64bit) 1. Improved performance of MP4 Fast and M4V Fast Profiles (no deinterlacing, removed --decomb) 2. Improved priority handling 3. Added support for Pausing and Resume conversions 4. Added support for fallback to source directory if network destination directory is unavailable 5. MCEBuddy now installs ShowAnalyzer during installation 6.
Added support for long description atom in iTunesDyanamic Reports (RDLC) - SharePoint 2010 Visual WebPart: Initial Release: This is a Initial Release.HTML Renderer: HTML Renderer 1.0.0.0 (3): Major performance improvement (http://theartofdev.wordpress.com/2012/10/25/how-i-optimized-html-renderer-and-fell-in-love-with-vs-profiler/) Minor fixes raised in issue tracker and discussions.Window Manager: Window Manager 1.0: First releaseNew Projectsarteytex: este es una prueba blockworld: An implementation of a goal stack planner.Customer Note: customer note is windows store applicationDraw: ?????????????:??????、CAD??、????。Football Management: Football Management System is web management system for football (soccer) leagues, teams and players. Hijri Converter API: This project is aimed to create a simple RESTful API using VB and ASP.NET to do Hijri-to-Gregorian and Gregorian-to-Hijri conversion.httpclient?????????: httpclient?????????(1)??????????(2)?????????(3)??2012-11-06??,???????。 Imagine Cup 2013: Develop project to Imagine Cup 2013MyAppReji: MyAppN2F Request: The N2F Request object is used to handle interactions between N2F and the global $_REQUEST variable, sanitizing any results which are returned.Orchard Metro Theme: Orchard Metro Theme is a clean and flexible multi-zone theme.Poker Clock And Goodies: poker w8ProjectASPReviewer: Review website for notebooks, tablets and smartphones.Prototype: Its about making an proto type for the final project.Prototype - 7COM0207: 7COM0207 web scripting module, Assignment 2QuickToAD: QuickToAD is a foundational development project for the purpose of jump-starting data-driven application projects.Release Manager: Release Manager is a project to design and develop Windows based Release Management Software.ResW File Code Generator: A Visual Studio 2012 Custom Tool for generating a strongly typed helper class for accessing localized resources from a .ResW file.SEO Tools: This is a website containing some commonly used SEO tools. I have only added a blog ping utility at this time but there is more to come. Thales communicator: A C# library that helps communicate with Thales HSMTrivial: A trivia framework: Trivial is a C# framework that helps you creating custom trivia-like applications.

    Read the article

  • Blocking RDP connections from secondary IP addresses

    - by user73995
    Hi, I have a web server running Windows 2008 R2 with multiple IPs assigned to it. I want to be able to RDP to the box via the main IP assigned to it, but not from any of the secondary IPs (i.e., the websites' IPs). I assume if I had control of the firewall I could shut off the RDP port on the secondary IPs, but it's in a hosted environment and I don't have access to the firewall. I'll contact the host to see if that's available, but I was hoping there was a setting on the machine itself to not answer RDP requests on the additional IPs. Any advice?
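
    Windows Server 2008 R2's built-in firewall can scope a rule to a single local IP, so no hardware firewall is needed; a sketch (replace 203.0.113.10 with your main IP):

        rem Add an RDP allow rule scoped to the main IP only
        netsh advfirewall firewall add rule name="RDP main IP only" dir=in action=allow protocol=TCP localport=3389 localip=203.0.113.10
        rem Then disable the built-in rule so the other IPs stop answering
        netsh advfirewall firewall set rule name="Remote Desktop (TCP-In)" new enable=no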

    Read the article

  • My View on ASP.NET Web Forms versus MVC

    - by Ricardo Peres
    Introduction A lot has been said on Web Forms and MVC, but since I was recently asked about my opinion on the subject, here it is. First, I have to say that I really like both technologies and I don’t think either is going away – just remember SharePoint, which is built on top of Web Forms. I see them as complementary, targeting different needs and leveraging different skills. Let’s go through some of their differences. Rapid Application Development Rapid Application Development (RAD) is the development process by which you have an Integrated Development Environment (IDE), a visual design surface and a toolbox, and you drag components from the toolbox to the design surface and set their properties through a property inspector. It was introduced with some of the earliest Windows graphical IDEs such as Visual Basic and Delphi. With Web Forms you have RAD out of the box. Visual Studio offers a generally good (and extensible) designer for the layout of pages and web user controls. Designing a page may simply be about dragging controls from the toolbox, setting their properties and wiring up some events to event handlers, which are implemented in code behind .NET classes. Most people will be familiar with this kind of development and enjoy it. You can see what you are doing from the beginning. MVC also has designable pages – called views in MVC terminology – the problem is that they can be built using different technologies, some of which, at the moment (MVC 4), do not support RAD – Razor, for example. I believe it is just a matter of time for that to be implemented in Visual Studio, but it will mostly consist of HTML editing, and until that day comes, you have to live with source editing. Development Model Web Forms features the same development model that you are used to from Windows Forms and other similar technologies: events fired by controls and automatic persistence of their properties between postbacks. For that, it uses concepts such as view state, which some may love and others may hate, because it may be misused quite easily, but otherwise does its job well. Another fundamental concept is data binding, by which a collection of data can be fed to a control and have it render that data somehow – just think of the GridView control. The focus is on the page, that’s where it all starts, and you can place everything in the same code behind class: data access, business logic, layout, etc. The controls take care of generating a great part of the HTML and JavaScript for you. With MVC there is no free lunch when it comes to data persistence between requests; you have to implement it yourself. As for event handling, that is at the core of MVC, in the form of controllers and action methods, you just don’t think of them as event handlers. In MVC you need to think more in HTTP terms, so methods such as POST and GET are relevant to you, and you may write actions to handle one or the other. Also of crucial importance is model binding: the way by which MVC converts your posted data into a .NET class. This is something that ASP.NET 4.5 Web Forms has introduced as well, but it is a cornerstone in MVC. MVC also has built-in validation of these .NET classes, which out of the box uses the Data Annotations API. You have full control of the generated HTML - except for that coming from the helper methods, usually small fragments - which requires a greater familiarity with the specifications.
    You normally rely much more on JavaScript APIs; they are even included in the Visual Studio template, because much less is done for you. Reuse It is difficult to accept a professional company/project that does not employ reuse. It can save a lot of time, thus cutting costs significantly. Code reused in several projects matures as time goes by and helps developers learn from past experiences. ASP.NET Web Forms was built with reuse in mind, in the form of controls. Controls encapsulate functionality and are generally portable from project to project (with the notable exception of web user controls, those with an associated .ASCX markup file). ASP.NET has dozens of controls and it is very easy to develop new ones, so I believe this is a great advantage. A control can inject JavaScript code and external references as well as generate HTML and CSS. MVC on the other hand does not use controls – it is possible to use them, with some view engines like ASPX, but it is just not advisable because it breaks the flow – where do Init, Load, PreRender, etc., fit? The closest thing to controls is extension methods, or helpers. They serve the same purpose – generating HTML, CSS or JavaScript – and can be reused between different projects. What differentiates them from controls is that there is no inheritance and no context – an extension method is just a static method which doesn’t know where it is being called. You also have partial views, which you can reuse in the same project, but again there is no inheritance. This, in my view, is a weakness of MVC. Architecture Both technologies are highly extensible. I have started writing a series of posts on ASP.NET Web Forms extensibility and will probably write another series on MVC extensibility as well. A number of scenarios are covered in either of these models, and some extensibility points apply to both, because, of course, both stand upon ASP.NET. With Web Forms, if you’re like me, you start by defining your master pages, pages and controls, with some helper classes to glue everything. You may as well throw in some JavaScript, but probably your main work will be with plain old .NET code. The controls you define have the chance to inject JavaScript code and references, through either the ScriptManager or the page’s ClientScript object, as well as generating HTML and CSS code. The master page and page model with code behind classes offer a number of “hooks” by which you can change the normal way of things, for example, in a page you can access any control on the master page, add script or stylesheet references to its head and even change the page’s title. Also, with Web Forms, you typically have URLs in the form “/SomePath/SomePage.aspx?SomeParameter=SomeValue”, which isn’t really SEO friendly, not to mention the HTML that some controls produce, far from standards, optimization and best practices. In MVC, you also normally start by defining the master page (or layout) and views, which are the visible parts, and then define controllers on separate files. These controllers do not know anything about the views, except the names and types of the parameters that will be passed to and from them. The controller will be responsible for the data access and business logic, eventually relying on additional classes for this purpose. On a controller you only receive parameters and return a result, which may be a request for the rendering of a view, a redirection to another URL or a JSON object, to name just a few.
    The controller class does not know anything about the web, so you can effectively reuse it in a non-web project. This separation and the lack of programmatic access to the UI elements makes it very difficult to implement, for example, something like SharePoint with MVC. OK, I know about Orchard, but it isn’t really a general purpose development framework, but instead, a CMS that happens to use MVC. Not having controls render HTML for you gives you in turn much more control over it – it is your responsibility to create it, which you can either consider a blessing or a curse; in the latter case, you probably shouldn’t be using MVC at all. Also MVC URLs tend to be much more SEO-oriented, if you design your controllers and actions properly. Testing In a well defined architecture, you should separate business logic, data access logic and presentation logic, because these are all different things and there might even be a need to switch one implementation for another: for example, you might design a system which includes a data access layer, a business logic layer and two presentation layers, one on top of ASP.NET and the other with WPF; and the data access layer might be implemented first using NHibernate and later on switched for Entity Framework Code First. These changes are not that rare, so care should be taken in designing the system to make them possible. Web Forms is difficult to test, because it relies on event handlers which are only fired in web contexts, when a form is submitted or a page is requested. You can call them with reflection, but you have to set up a number of mocking objects first, HttpContext.Current first coming to my mind. MVC, on the other hand, makes testing controllers a breeze, so much that it even includes a template option for generating boilerplate unit test classes from the start. A well designed – from the unit test point of view - controller will receive everything it needs to work as parameters to its action methods, so you can pass whatever values you need very easily. That doesn’t mean, of course, that everything can be tested: views, for instance, are difficult to test without actually accessing the site, but MVC offers the possibility to compile views at build time, so that, at least, you know you don’t have syntax errors beforehand.
    Myths Some popular but unfounded myths around MVC include: You cannot use controls in MVC: not true, actually, you can, at least with the Web Forms (ASPX) view engine; the declaration and usage is exactly the same as with Web Forms; You cannot specify a base class for a view: with the ASPX view engine you can use the Inherits attribute of the Page directive; with this and all the others you can use the pageBaseType and userControlBaseType attributes of the <page> element; MVC shields you from doing “bad things” on your views: well, you can place any code on a code block, at least with the ASPX view engine (you may be starting to see a pattern here), even data access code; The model is the entity model, tied to an O/RM: the model is actually any class that you use to pass values to a view, including (but generally not recommended) an entity model; Unit tests come with no cost: unit tests generally don’t cover the UI, although there are frameworks just for that (see WatiN, for example); also, for some tests, you will have to mock or replace either the HttpContext.Current property or the HttpContextBase class yourself; Everything is testable: views aren’t, without accessing the site; MVC relies on HTML5/some_cool_new_javascript_framework: there is no relation whatsoever, MVC renders whatever you want it to render and does not require any framework to be present. The thing is, the subsequent releases of MVC happened at a time when Microsoft had become much more involved in standards, so the files and technologies included in the Visual Studio templates reflect this, and it just happens to work well with jQuery, for example. Conclusion Well, this is how I see it. Some folks may think that I am being too hard on MVC, probably because they assume I don’t like it, but that’s not true: like I said, I do like MVC and I am starting my new projects with it. I just don’t want to go along with those who say that MVC is much superior to Web Forms; in fact, some things you can do much more easily with Web Forms than with MVC. I will be more than happy to hear what you think on this!

    Read the article

  • How to list my harddrives in OpenSolaris?

    - by Sanoj
    I have a machine with two hard drives. I have installed OpenSolaris on one of them and now I want to add the other as a mirror drive in my zpool. But to do that I have to know the name of the drive, something like c7d0s0. Using the command zpool status I can see the hard drives already in use in my zpool, but not the drives that are not in use. How can I list the names of the hard drives that are not in use in any zpool, or list the names of all my hard drives?
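
    On OpenSolaris the quickest inventory of every disk, in use or not, is the format utility, which prints the c#t#d# names before prompting (so feed it EOF to exit immediately); cfgadm gives a similar view:

        # List every disk the OS can see, then bail out of the interactive prompt
        echo | format
        # Attachment-point view, including unconfigured devices
        cfgadm -al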

    Read the article
