Search Results

Search found 23890 results on 956 pages for 'issue'.


  • Nginx https rewrite turns POST to GET

    - by x7311
    My proxy server runs on IP A, and this is how people access my web service. The nginx configuration redirects to a virtual machine on IP B. For the proxy server on IP A, I have this in sites-available:

        server {
            listen 443;
            ssl on;
            ssl_certificate nginx.pem;
            ssl_certificate_key nginx.key;
            client_max_body_size 200M;
            server_name localhost 127.0.0.1;
            server_name_in_redirect off;

            location / {
                proxy_pass http://10.10.0.59:80;
                proxy_redirect http://10.10.0.59:80/ /;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

        server {
            listen 80;
            rewrite ^(.*) https://$http_host$1 permanent;
            server_name localhost 127.0.0.1;
            server_name_in_redirect off;

            location / {
                proxy_pass http://10.10.0.59:80;
                proxy_redirect http://10.10.0.59:80/ /;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The proxy_redirect lines were taken from "how do I get nginx to forward HTTP POST requests via rewrite?". Everything that hits the public IP ends up on 443 because of the rewrite; internally we forward to port 80 on the virtual machine. But when I run a Python script such as the one below to test our configuration:

        import requests
        data = {'username': '....', 'password': '.....'}
        url = 'http://IP_A/api/service/signup'
        res = requests.post(url, data=data, verify=False)
        print res
        print res.json
        print res.status_code
        print res.headers

    I get a 405 Method Not Allowed. In nginx we found that when the request hit the internal server, the internal nginx received a GET, even though the original request was a POST (as shown in the Python script). So it seems like the rewrite is the problem. Any idea how to fix this? When I commented out the rewrite, the request hit port 80 and went through, so the proxying to our internal server is fine; it's just that the rewrite drops the POST to a GET. Thank you! (This will also be asked on the Nginx forum because this is a critical blocker...)
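
    A hedged note on what the 405 usually indicates here: "rewrite ... permanent" answers with a 301, and HTTP clients (requests included) replay a 301/302 as a GET, so the backend only ever sees GET; swapping the permanent rewrite for a 307/308 redirect is the usual way to preserve the method. The sketch below only diagnoses the behaviour from the client side; the URL is the placeholder host from the question.

        import requests

        data = {'username': 'user', 'password': 'secret'}
        url = 'http://IP_A/api/service/signup'   # placeholder host from the question

        # Don't follow the redirect, so we can see what the proxy returns first.
        res = requests.post(url, data=data, allow_redirects=False, verify=False)
        print(res.status_code)                   # a 301 comes from "rewrite ... permanent"
        print(res.headers.get('Location'))

        # When redirects are followed, requests (like browsers) replays a 301/302
        # as GET, which is why the backend answers 405. A 307/308 keeps the POST.
        res = requests.post(url, data=data, verify=False)
        print([r.status_code for r in res.history], res.request.method)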

    Read the article

  • What DNS server to use for dynamic load-balancing of website?

    - by Marki555
    I will have 2 servers in different datacenters (different countries) and I want to use DNS load balancing, mainly for high availability of the website hosted on those 2 servers. It is just an ad tracking site, which records each hit in a local database and returns a few lines of HTML. I want to return 2 A records each time because of DNS pinning in browsers (if one server fails, the browser will try the second A record, which it has already cached). Both servers will also act as DNS servers for redundancy.

    Now comes my proposed solution: I will use BIND and have both servers as masters for the zone. On each server a script will run periodically, testing the availability (HTTP) of both servers and removing an IP from DNS in case of failure.

    Now the questions :)

    1) Is BIND suitable for this solution? I think BIND performance is good and it is easy to manipulate the zone file via a script. And as I will modify the zone only in case of failure/maintenance, the modifications (and thus BIND reloads) won't be frequent.

    2) I plan to use a TTL of 5 minutes. The website will have about 1000-3000 req/s but from distinct clients (each IP only 1-3 requests), so I think the DNS load won't be too high. I suppose their ISPs will cache the responses for those 5 minutes. Is there any reason to lower the TTL even more?

    3) Is my master-master approach good? Or should I make one of the servers a master and the other one a slave? Right now each server can monitor both itself and the other one. If only the web service fails, both DNS nodes will notice it. If the whole server fails, the remaining DNS node will notice it, and the failed node will not answer DNS queries anyway.

    4) Is it a big issue when one NS server does not respond to queries? If yes, I can add a third DNS server, so that at any time at least 2 of them would accept queries...

    5) Should I rewrite the zone file via the script, or just use dynamic DNS updates (for example via the nsupdate utility)?
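
    Regarding question 5, dynamic updates avoid hand-editing the zone file and the associated serial bumps and reloads. Below is a minimal health-check sketch along those lines, assuming BIND is configured to allow dynamic updates (in practice secured with a TSIG key passed to nsupdate with -k); hostnames and the documentation IP range are placeholders.

        import subprocess
        import urllib.request

        ZONE = 'example.com'                       # placeholder zone
        RECORD = 'www.example.com.'
        SERVERS = {'192.0.2.1': 'http://192.0.2.1/health',
                   '192.0.2.2': 'http://192.0.2.2/health'}

        def is_up(url):
            try:
                return urllib.request.urlopen(url, timeout=5).status == 200
            except OSError:
                return False

        for ip, url in SERVERS.items():
            if not is_up(url):
                # Feed an update batch to the local BIND via nsupdate.
                cmds = 'server 127.0.0.1\nzone %s\nupdate delete %s A %s\nsend\n' % (ZONE, RECORD, ip)
                subprocess.run(['nsupdate'], input=cmds, text=True, check=False)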

    Read the article

  • Beginner server local installation

    - by joanjgm
    Here's the thing: I own a small business, and until now my email has been managed by some regular hosting using cPanel. I bought a small server and installed Windows Server and Exchange. Can you tell me what I did wrong here?

    - Installed and configured my existing domain
    - Configured all email addresses
    - Installed No-IP in case my public address changes
    - In the cPanel of the domain, added an MX record pointing to the No-IP hostname of the server with priority 0

    So now emails are being received by my own server. But whenever I send an email to anyone (Gmail, Hotmail, etc.) I get a response saying it cannot be delivered since it may be junk. This didn't happen when I sent emails from the hosting. What's missing, what did I do wrong? Here's the bounce:

        mx.google.com rejected your message to the following e-mail addresses:
        Joan J. Guerra Makaren ([email protected])
        mx.google.com gave this error:
        [186.88.202.13 12] Our system has detected that this message is likely unsolicited mail. To reduce the amount of spam sent to Gmail, this message has been blocked. Please visit http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for more information. cn9si815432vcb.71 - gsmtp

        Your message wasn't delivered due to a permission or security issue. It may have been rejected by a moderator, the address may only accept e-mail from certain senders, or another restriction may be preventing delivery.

        Diagnostic information for administrators:
        Generating server: SERVERMEGA.megaconstrucciones.com.ve
        [email protected]
        mx.google.com #550-5.7.1 [186.88.202.13 12] Our system has detected that this message is 550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail, 550-5.7.1 this message has been blocked. Please visit 550-5.7.1 http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for 550 5.7.1 more information. cn9si815432vcb.71 - gsmtp

        Original message headers:
        Received: from SERVERMEGA.megaconstrucciones.com.ve ([fe80::9096:e9c2:405b:6112]) by SERVERMEGA.megaconstrucciones.com.ve ([fe80::9096:e9c2:405b:6112%10]) with mapi; Thu, 29 May 2014 11:32:19 -0430
        From: prueba <[email protected]>
        To: "Joan J. Guerra Makaren" <[email protected]>
        Subject: Probando correos
        Thread-Topic: Probando correos
        Thread-Index: Ac97V1eW4OBFmoqJTRGoD7IPTC2azg==
        Date: Thu, 29 May 2014 16:04:35 +0000
        Message-ID: <[email protected]>
        Accept-Language: en-US, es-VE
        Content-Language: en-US
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator:
        Content-Type: multipart/alternative; boundary="_000_000f42494487966276f7b241megaconstruccionescomve_"
        MIME-Version: 1.0
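
    For context, Gmail's 550-5.7.1 "likely unsolicited mail" rejection is commonly triggered when mail comes straight from a dynamic or residential IP with no matching reverse DNS (PTR) and no SPF record for the sending domain, which is the situation a No-IP hostname on an office connection tends to be in. A small check sketch, using the IP and domain from the bounce; the dnspython package is an assumption, not something from the question:

        import socket
        import dns.resolver   # pip install dnspython (2.x assumed)

        sending_ip = '186.88.202.13'
        domain = 'megaconstrucciones.com.ve'

        # Reverse DNS: receivers expect the PTR of the sending IP to resolve to a
        # real hostname that resolves back to the same IP.
        try:
            print('PTR:', socket.gethostbyaddr(sending_ip)[0])
        except OSError as e:
            print('No PTR record:', e)

        # SPF: the domain should publish a TXT record authorising the server's IP.
        for rr in dns.resolver.resolve(domain, 'TXT'):
            txt = b''.join(rr.strings).decode()
            if txt.lower().startswith('v=spf1'):
                print('SPF:', txt)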

    Read the article

  • VLAN issues between linux kernels 2.6 / 3.3 in an ESX / Cisco environment

    - by David Griffith
    I shall attempt to explain an issue I have encountered. I have a VM running on ESX 4.1 with an interface connected to VLAN 800 via an access port on a Cisco 3750. It runs Linux (kernel 2.6.24) and has about 5 to 10 Mbit of chatter on 10.10.0.0/16 and various multicast addresses to look after.

    I needed to isolate certain devices from certain other devices on the network, with all of them still having to talk to that one VM. No, the address space can't be separated, nor can the networks be easily VLAN'd apart. The software on the VM listens on one interface only. Private VLANs appear to be the way to go. So as a test, I built a bridge on the VM that globs together the VLANs as needed. All good, everything works as expected. But occasionally (sigh) there's some latency that trips up a couple of Profinet devices on the network because, you know, you're not really supposed to trunk real-time protocols around the place willy-nilly.

    I shifted it to our test/backup server - works nicely, but I don't want it running on the test server as we muck around with that a lot. So I said to myself, "I'll put it on a new VM for testing and tweaking." I downloaded a small Linux distro with kernel 3.3 and installed it as a new VM with the VLANs as separate interfaces for testing.

    I power up the testing VM - ok. I bring up all the separate interfaces - ok. I can ping the production VM, see all sorts of traffic going past with tshark, etc. I build a bridge and put the primary VLAN on it - the production VM running 2.6 immediately loses its multicast traffic; unicast is fine. (?) I shut down the bridge - still no multicast traffic (!?) I power-cycle the production VM (!?!?) - multicast traffic returns. I trunk everything into the testing VM and create VLAN interfaces under Linux instead - same result: as soon as I start the bridge, no multicast on the production VM.

    Ok, so I take a break and leave things alone. I decide to play with a couple of Ubiquiti Bullet radios - I'm testing various firmware as a side project. I flash a radio with OpenWrt 12.09. I enable a trunk on a port on a Cisco on our network so I can muck around with multiple VLANs and SSIDs. I power up the radio and connect - ok. I create a VLAN interface from the trunk... the same VLAN as the production VM, way over there, three Cisco routers away. Ok. I bridge the VLAN interface to the wifi interface and immediately get a phone call. The production VM has (surprise!) lost its multicast traffic. Again, nothing comes back until I power-cycle the VM.

    What the hell is going on?
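
    One hedged observation: on kernels of that era, a newly created Linux bridge has IGMP snooping enabled and may start sending its own IGMP queries for the segment, which can upset the snooping state on the switch side and starve other hosts of multicast until group membership is refreshed - a symptom much like the one described. A quick thing to try on the test VM is turning the new bridge's snooping off via sysfs (bridge name br0 is a placeholder; needs root, kernel 2.6.38 or later):

        # Minimal sketch: disable IGMP snooping on a Linux bridge.
        BRIDGE = 'br0'   # placeholder bridge name
        path = '/sys/class/net/%s/bridge/multicast_snooping' % BRIDGE
        with open(path, 'w') as f:
            f.write('0\n')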

    Read the article

  • How to install RAID drivers on already installed Windows 7?

    - by happysencha
    64-bit Windows 7 Ultimate, 6 GB RAM, Intel i7 920, Intel X25-M SSD 80 GB 2.5", Club 3D Radeon HD5750, GA-EX58-UD4P motherboard.

    I've been running fine with Windows 7 installed on the SSD. I wanted to create a mirrored RAID-1 setup for backups using two hard disks, so I ordered two Samsung HD203WI. This motherboard has two different RAID controllers: Intel's ICH10R and Gigabyte's SATA2 controller. There are 6 SATA ports behind the ICH10R and 2 SATA ports on the Gigabyte controller. I googled around and it seemed that the ICH10R is the better choice, and since then I've been trying to make it work.

    When I activate [RAID] mode in the BIOS, Windows 7 blue-screens exactly as described by this guy: "Windows 7 will start to boot, it gets to the screen where there are 4 colors coming together and it blue screens and restarts no matter what I do."

    First thing I did: turned off the RAID, booted to Windows and tried to install the SATA RAID drivers from Gigabyte. I launch the driver installation program and it gives "This computer does not meet the minimum requirements for installing the software". I then tried Intel's Rapid Storage Technology drivers (which are apparently the same as the ones offered on Gigabyte's site), but that resulted in exactly the same error.

    I then detached the new Samsung hard disks from the SATA ports but left [RAID] enabled in the BIOS. To my surprise, it still BSOD'd, so at this point I knew it was an OS/driver issue. Also, I tried with the Gigabyte RAID enabled (while the ICH10R RAID was disabled) and it booted just fine.

    So then I thought that maybe I can't install the RAID drivers from within the OS. So I caused the BSOD on purpose once again and then, with the ICH10R RAID activated and the Samsung hard disks attached, chose Windows 7 Recovery mode in the boot menu. It sees some problem(s), tries to repair, does not succeed, and does not ask for drivers (which I put on a USB stick) to install. I also tried the command line in the recovery environment: "rundll32 syssetup, SetupInfObjectInstallAction DefaultInstall 128 iaStor.inf" but it gave "Installation failed."

    So I'm clueless how to proceed. Do I really need to re-install Windows 7 and load the RAID drivers in the Win7 setup? I don't want to install any OS on the RAID; Windows 7 is and will stay on the SSD. I just want a RAID-1 backup using those two hard disks. I mean, why would I need to re-install the operating system just to add a RAID array?
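
    A commonly suggested fix for exactly this switch-to-RAID BSOD (hedged - not verified on this particular board) is to mark Windows' inbox AHCI/RAID miniport drivers as boot-start before flipping the BIOS setting, rather than running the vendor installer. In the registry that means setting Start=0 for the msahci and iaStorV services; the same change is sketched below with Python's winreg, run from an elevated prompt, but editing the two values in regedit does the same thing.

        import winreg

        # Mark the inbox AHCI/RAID miniport services as boot-start (Start = 0) so the
        # existing installation can boot once the controller is switched to RAID mode.
        for svc in ('msahci', 'iaStorV'):
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                                 r'SYSTEM\CurrentControlSet\services' + '\\' + svc,
                                 0, winreg.KEY_SET_VALUE)
            winreg.SetValueEx(key, 'Start', 0, winreg.REG_DWORD, 0)
            winreg.CloseKey(key)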

    Read the article

  • Nginx SSL redirect for one specific page only

    - by jjiceman
    I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo), and it is working well for a few of the site administrators but not for me. I was wondering if anyone has any advice on how to improve this configuration:

        ...
        ssl_certificate example.net.crt;
        ssl_certificate_key example.key;

        server {
            listen 80 default;
            listen 443 ssl;
            server_name www.example.net example.net;
            access_log /srv/www/example.net/logs/access.log;
            error_log /srv/www/example.net/logs/error.log;
            root /srv/www/example.net/public_html;
            index index.php index.html;

            location / {
                if ( $scheme = https ) {
                    rewrite ^ http://example.net$request_uri? permanent;
                }
                try_files $uri $uri/ /index.php?$uri&$args;
                index index.php index.html;
            }

            location ^~ /admin.php {
                if ( $scheme = http ) {
                    rewrite ^ https://example.net$request_uri? permanent;
                }
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
            }

            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS off;
            }
        }
        ...

    It seems that the extra information in the location ^~ /admin.php block is unnecessary - does anyone know of an easy way to avoid the duplicate code? Without it, the request skips the PHP block and nginx just returns the PHP files. Currently it applies HTTPS correctly in Firefox when I navigate to admin.php. In Chrome, it downloads the admin.php page. When returning to the non-HTTPS part of the site in Firefox, it does not correctly return to http but stays on SSL. Like I said earlier, this only happens for me; the other admins can go back and forth without a problem. Is this an issue on my end that I can fix? And does anyone know of any way I could reduce the duplicated options in the configuration? Thanks in advance!

    EDIT: Clearing the cache/cookies seemed to work. Is this the right way to do http/https redirection? I sort of made it up as I went along.
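
    One hedged explanation for the "only happens to me" part: "rewrite ... permanent" issues 301s, which browsers cache aggressively, so a browser that once saw a redirect for a URL keeps replaying it from cache until the cache is cleared - consistent with the EDIT above. A quick client-side check of both redirect directions (placeholder hostname; verify=False only in case of a self-signed test certificate):

        import requests

        checks = [
            ('http://example.net/admin.php',  'https://'),   # should bounce to HTTPS
            ('https://example.net/index.php', 'http://'),    # should bounce back to HTTP
        ]
        for url, expected_prefix in checks:
            r = requests.get(url, allow_redirects=False, verify=False)
            loc = r.headers.get('Location', '')
            print(url, r.status_code, loc,
                  'OK' if loc.startswith(expected_prefix) else 'unexpected')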

    Read the article

  • BlueScreens on my ThinkPad with Windows 7 64 Bit and a SSD (CRITICAL_OBJECT_TERMINATION, ntoskernel.exe)

    - by pvorb
    I've been getting blue screens about every five days for more than three months. Here's an example:

        A problem has been detected and Windows has been shut down to prevent damage to your computer.
        The problem seems to be caused by the following file: ntoskrnl.exe
        CRITICAL_OBJECT_TERMINATION
        If this is the first time you've seen this stop error screen, restart your computer. If this screen appears again, follow these steps:
        Check to make sure any new hardware or software is properly installed. If this is a new installation, ask your hardware or software manufacturer for any Windows updates you might need.
        If problems continue, disable or remove any newly installed hardware or software. Disable BIOS memory options such as caching or shadowing. If you need to use safe mode to remove or disable components, restart your computer, press F8 to select Advanced Startup Options, and then select Safe Mode.
        Technical Information:
        *** STOP: 0x000000f4 (0x0000000000000003, 0xfffffa80065f2b30, 0xfffffa80065f2e10, 0xfffff80002f9bf40)
        *** ntoskrnl.exe - Address 0xfffff80002c98d00 base at 0xfffff80002c19000 DateStamp 0x4d9fdd5b

    It has always been the same blue screen message showing CRITICAL_OBJECT_TERMINATION, 0x000000f4, and ntoskrnl.exe. Of course the addresses change.

    My computer is a ThinkPad T400 (about 2 years old) with an SSD in it, running Windows 7 Professional 64-bit. When I bought the computer it had a 250 GB Seagate HDD, which I replaced with a 500 GB HDD by Western Digital. Last September I bought a Corsair F120 SSD and replaced the HDD with this SSD. Then I bought a LEICKE HDD adapter for the UltraBay II, where I plugged in the 500 GB HDD. This configuration ran for about half a year without any errors. After re-installing Windows this spring, I am getting regular blue screens. Sometimes the system runs for about 2 weeks without a BSOD, sometimes I get several blue screens a day. The only thing I noticed is that I'm always running Google Chrome when it happens.

    Is there anyone who has had bad experiences with some of my components, or can anybody tell me whether it would be helpful to send my notebook to Lenovo? Thank you very much for your help on my issue!

    Regards, Paul

    Read the article

  • Detection of battery status totally messed up

    - by Faabiioo
    I already posted this question in the Ubuntu forum and on Stack Overflow. I'm forwarding it here in the hope of finding some different opinions about the problem.

    I have an Acer TravelMate 5730, which is 3 years old, running Ubuntu 10.04 LTS. One year ago I changed the battery because the old one died. Since then, everything worked like a charm. A week ago I was using my laptop on battery; it was charged up to 60%. Suddenly it shut down, and for about 24 hours it was as if the battery was totally broken: it didn't charge anymore and 'upower --dump' said state: critical. I was kind of resigned to buying a new battery when suddenly the orange light became green: the battery was charged and actually working. Strangely, the battery indicator was stuck at 100%, even after 2 hours of running. I tried the 'upower --dump' and 'acpi -b' commands again and they kept saying the battery was discharging, while keeping the percentage at 100%. So the battery worked fine for up to 3 hours, without any warning when it was almost empty, which is likely to end in a brute shutdown.

    Today, something different. The 'upower --dump' command says:

        ...
        present: yes
        rechargeable: yes
        state: fully-charged
        energy: 0 Wh
        energy-empty: 0 Wh
        energy-full: 65.12 Wh
        energy-full-design: 65.12 Wh
        energy-rate: 0 W
        voltage: 14.481 V
        percentage: 0%
        capacity: 100%
        technology: lithium-ion

    I tried booting Windows XP and the problem is pretty much the same: the battery reported as fully charged, the percentage equal to 0%, and no way to fix it. While writing, the situation has changed again:

        present: yes
        rechargeable: yes
        state: charging
        energy: 0 Wh
        energy-empty: 0 Wh
        energy-full: 65.12 Wh
        energy-full-design: 65.12 Wh
        energy-rate: 0 W
        voltage: 14.474 V
        percentage: 0%
        capacity: 100%
        technology: lithium-ion

    ...charging, but it does not charge up. (Recall, the battery lasted 3 hours until yesterday!)

    So, the big question is: is it a hardware issue, like a broken dedicated internal circuit? Or maybe it is just the battery that must be changed. Or, rather, some BIOS problem that could be fixed in some way. I'd appreciate any help that can shed some light on this annoying problem. Thanks

    Read the article

  • netbook intel GMA 3150 external monitor 1920x1080 flicker problem

    - by seyenne
    Dear all, I recently purchased an Acer netbook (Aspire One D260). It runs flawlessly. Yesterday I bought a Samsung 23" TFT with a native resolution of 1920x1080. According to the information found on the internet and from my local computer dealer, the Intel chipset can handle the native resolution of the monitor. However, this is only partly the case.

    I use the VGA cable to connect; the monitor instantly switches to the native resolution, and now the problem: occasionally, especially in the first 2 hours after booting up, I get flickering all over the screen, and sometimes the entire screen is shaking and spinning around like crazy. I figured out that lowering the resolution avoids the flicker, but this helps only for some time. I can rule out the monitor as the problem, since I found no issues with another notebook. Right now I have no problems with the netbook; for about 30 minutes I haven't experienced any issues... but I don't know for how long - it occurs without warning :-)

    I'm worried that if I bring the netbook back to the dealer and explain my problem, after testing it on an external screen in the local shop everything will work just fine, and I won't get any help with the problem because I can't prove it. (I'm currently in Thailand, and over here customer service is nothing like back home in Germany.)

    What can I do? Is this a driver-related issue? (I installed the latest GPU driver.) Is it because of the VGA cable? (But why does it work sometimes without any problems, and with no issues on the other notebook?) I monitored the GPU/CPU temperature; nothing really changes over time. Can it simply be a faulty GPU, and is a replacement justifiable? I'm really stressed now because for the time I've been writing, the flickering didn't occur... but sooner or later it will happen again.

    I forgot to mention: the problem also happens if the netbook runs on battery, unplugged. So the only hardware that is plugged in is the TFT screen. ...and here it comes again, the flickering has just begun. NEED HELP! Thank you all for reading through this and giving any suggestions if possible. Cheers

    Read the article

  • Incremental RPM package version "numbers" for x.y.z > x.y.z-beta (or alpha, rc, etc)

    - by Jonathan Clarke
    In order to publish RPM packages of several different versions of some software, I'm looking for a way to specify version "numbers" that are considered "upgrades", and that include the differentiation of several pre-release versions, such as (in order): "2.4.0 alpha 1", "2.4.0 alpha 2", "2.4.0 alpha 3", "2.4.0 beta 1", "2.4.0 beta 2", "2.4.0 release candidate", "2.4.0 final", "2.4.1", "2.4.2", etc.

    The main issue I have with this is that RPM considers "2.4.0" to come earlier than "2.4.0.alpha1", so I can't just add the suffix to the end of the final version number. I could try "2.4.0.alpha1", "2.4.0.beta1", "2.4.0.final", which would work, except for the "release candidate", which would be considered later than "2.4.0.final".

    An alternative I considered is using the "epoch:" section of the RPM version number (the epoch: prefix is considered before the main version number, so that "1:2.4.0" is actually earlier than "2:1.0.0"). By putting a timestamp in the epoch: field, all the versions get ordered as expected by RPM, because their versions appear to increment in time. However, this fails when new releases are made on several major versions at the same time (for example, 2.3.2 is released after 2.4.0, but their versions for RPM are "20121003:2.3.2" and "20120928:2.4.0", and systems on 2.3.2 can't get "upgraded" to 2.4.0, because rpm sees it as an older version). In this case, yum/zypper/etc. refuse to upgrade to 2.4.0 - thus my problem.

    What version numbers can I use to achieve this and make sure that RPM always considers the version numbers to be in order? Or, if not version numbers, is there another mechanism in RPM packaging?

    Note 1: I would like to keep the "Release:" field of the spec file for its original purpose (several releases of packages, including packaging changes, for the same version of the packaged software).

    Note 2: This should work on current production versions of major distributions, such as RHEL/CentOS 6 and SLES 11. But I'm interested in solutions that don't, too, so long as they don't involve recompiling rpm!

    Note 3: On Debian-like systems, dpkg uses a special component in the version number, which is the "~" (tilde) character. This causes dpkg to count the suffix as "negative" ordering, so that "2.4.0~anything" will come before "2.4.0". Then normal ordering applies after the "~", so "2.4.0~alpha1" comes before "2.4.0~beta1" because "alpha" comes before "beta" alphabetically. I'm not necessarily looking to use the same scheme for RPM packages (I'm pretty sure no such equivalent exists), so this is just FYI.
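
    For reference, newer rpm releases (4.10 and later, so not the stock RHEL/CentOS 6 rpm) do support the Debian-style "~" ordering mentioned in Note 3. Whatever scheme is chosen, the installed rpm's own comparison can be checked directly from its Python bindings; a small sketch, with candidate strings that are just examples:

        import rpm   # the Python bindings shipped with rpm itself

        candidates = ['2.4.0~alpha1', '2.4.0~beta1', '2.4.0~rc1', '2.4.0', '2.4.1']

        def cmp_versions(v1, v2):
            # labelCompare takes (epoch, version, release) triples and returns -1/0/1.
            return rpm.labelCompare(('0', v1, '1'), ('0', v2, '1'))

        for a, b in zip(candidates, candidates[1:]):
            print('%s %s %s' % (a, '<' if cmp_versions(a, b) < 0 else '>=', b))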

    Read the article

  • Windows 7 immediately disconnects a USB drive

    - by Daniel Saner
    I am having a problem with Windows 7 x64 consistently disconnecting one specific USB mass storage drive immediately after it is connected. The drive in question is a Cowon C2 digital music player which works in standard mass storage controller mode (i.e. no device-specific drivers needed/available).

    When I connect the player, Windows plays the "USB connect" sound and the device appears (under its correct name) in the device manager, but it never appears as a drive. The player itself displays "USB Connected" for a split second before reporting that it has been disconnected again. Since the player, by design, reboots after it has been disconnected, Windows plays the "USB disconnect" sound before restarting the whole cycle once the player has powered back on.

    I am connecting the player through an Intel X79 chipset motherboard (Gigabyte GA-X79-UD3) to Windows 7 Pro 64-bit. The player used to work fine the first few times I connected it, showing up as an external drive; it only recently stopped working. It is not a problem with the player, since it works fine when connected to another computer, even one running the exact same operating system. It is also not a problem with the USB controller, since the issue is the same on both the Intel USB 2.0 and the Fresco Logic FL1009 USB 3.0 controller ports. I have also not had the problem with any other drive so far.

    Among the things I have tried so far:

    - Disabling USB legacy mode in the BIOS
    - Disabling energy-saving power down for all USB controllers in Windows' device manager
    - Removing and reinstalling Windows' USB mass storage driver
    - Removing and reinstalling the Intel and Fresco Logic USB controller drivers
    - Restoring the player to factory defaults

    None of these made a difference. Again, the player used to work fine on the exact same system just days ago; I didn't install any new hardware or drivers since then. I would be very grateful for any hints on what else to try.

    Edit: Here is another new hint: I found out that when I connect the drive before booting Windows, it is available in Windows Explorer as it should be, and does not automatically disconnect. If I remove and reconnect it, though, the infinite connect/disconnect loop starts anew.

    Read the article

  • Internet Pings but Does Not Load

    - by t3techcom18
    From what I've been seeing and from my research over the past two days, many people have had the same issue over the years; however, this is the first time I've encountered it, and many of the specific workarounds or fixes have not worked for me. I've been trying to work through this for 24 hours straight now, but to no avail, so many thanks to those who can help.

    On Monday night I got home from work and was surfing the internet for half an hour; everything was fine as always. Just after half an hour, my internet got very sluggish and then it died completely. I thought it might have been an update I had just put through via Windows Update that was listed as a critical update for MSE, as the same thing happened a few years ago. I did a System Restore to two different dates within the past two weeks: nothing. Uninstalled MSE and disabled Windows Defender and the Windows Firewall: nothing. Reset IE options, reset Winsock, flushed DNS, and ran many of the other command prompt resets: nothing. Reset the modem: nothing.

    What DID work, however, was a ping test to Yahoo. The ping test worked, saying all four packets were received, yet nothing else would load. The LAN and CenturyLink said everything worked on their end and that everything was connected properly, and the speeds were fine. CenturyLink said in their notes that they thought port 80 was blocked. I added a rule in the firewall to allow port 80, but it didn't make any difference whatsoever. I remembered I had a spare modem lying around and I switched them, both modem and cords: nothing. I then hooked it up to my netbook to see if that would work, as it usually does - the connection didn't work there either.

    Like I said, it's been about 24 hours now and this is increasingly frustrating, as I've tried all the suggested solutions (while browsing through 10 pages of search results on my phone) and still nothing. Any suggestions and tricks would be greatly appreciated! Here are my specs:

    - Windows 7 32-bit Home Premium
    - Intel Core 2 Duo 3.14 GHz
    - 4 GB Kingston DDR2 RAM
    - eVGA nForce 750i SLI
    - eVGA GeForce GTX 560 Ti FPB
    - ISP: CenturyLink
    - No router
    - Modem: CenturyLink 660 Series
    - Hardwired connection

    PLEASE NOTE: This is the only computer I have (like I said, the netbook solution didn't work), so downloading programs and such is not an option until I get to other computers somewhere else, like right now. Unless someone knows of a way of copying/pasting a file in Windows and then transferring said info to an Android smartphone, this is gunna take a while haha. Patience is requested.
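
    Since ICMP ping succeeds but pages never load, the usual next step is to confirm whether outbound TCP (ports 80/443) gets through at all and whether DNS resolves. A minimal probe, sketched in Python purely for illustration (a telnet to port 80 from a command prompt gives the same answer without installing anything):

        import socket

        print('DNS:', socket.gethostbyname('www.google.com'))

        for host, port in [('www.google.com', 80), ('www.google.com', 443)]:
            try:
                with socket.create_connection((host, port), timeout=5):
                    print(host, port, 'TCP connect OK')
            except OSError as e:
                print(host, port, 'failed:', e)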

    Read the article

  • Windows Update and IE fail to connect, but Chrome fine?

    - by I Gottlieb
    Out of ideas on this one. (Running Windows Vista.)

    I have a program that accesses the internet to retrieve financial market data. One day it tells me that it can't log in - timeout error. I check the documentation and it says I must have a working copy of the IE browser installed. I check IE (I have IE9) and sure enough - it just spins. No error message, no timeout, no 'try later' - just spins, as far as I can tell, indefinitely. Any page, any address. Even access to a localhost site just spins. Chrome works fine. So does another program I have that fetches market data. Windows 'diagnose and repair' says my internet connection is working fine.

    I tried an uninstall/re-install of IE. Same spinning. I tried to install Windows Updates, and guess what? I can't. It comes up with error 80072efd; I checked the documentation for the error and it says I should check for firewall blockage. The thing is, the only firewall I have is Windows Firewall, and obviously it wouldn't be blocking Windows Update. In contrast, Windows 'Help' in all programs has no problem accessing the internet.

    I had a filter on the internet connection, and it was updated just prior to the first appearance of the problem. But I uninstalled the filter entirely (officially, with a password from the company's service rep) - and no difference. I'm guessing that a high-level Windows network service file is corrupted, used only by MS programs and their ilk, but how do I find it? I'd like to avoid having to do a clean install of Windows. Much obliged for any insight. IG

    Ramhound - thanks for the reply. I'm familiar with virtual machines as in, e.g., the JVM, or an emulator for an alternative architecture, or (theoretical) Turing Machine equivalence. But I'm not familiar with the way you're using the term. Please clarify: what does one need for this VM 'test', and why do you expect it will provide an advantage or insight into the problem? And what sort of 'configuration issue' are you referring to? IG
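
    One hedged avenue worth checking, given that IE and Windows Update fail while Chrome works and the problem appeared right after a content-filter update/uninstall: both IE and Windows Update go through the system's WinINET/WinHTTP proxy settings, which a filter can leave pointing at a proxy that no longer exists. The netsh commands below (run from an elevated prompt) show and, if needed, clear the WinHTTP proxy; the Python wrapper is only for illustration, the two commands can just be typed directly.

        import subprocess

        for cmd in ('netsh winhttp show proxy', 'netsh winhttp reset proxy'):
            out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            print('$', cmd)
            print(out.stdout)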

    Read the article

  • Asterisk: Dropping calls with an "ast_yyerror"

    - by Nick
    I'm having an intermittent issue where Asterisk will play our greeting to the caller and then drop the call instead of making our phones ring. I'm unable to reproduce the problem with any phones I have here, and many callers get through just fine. Some callers, though, run into the problem, and I can't find any pattern to it. The bit of information I could find said it was caused by an error in evaluating a dialplan expression. I'm thinking it's this line:

        exten = START,n,GotoIf($[${FORCE_CLOSED}=TRUE]?CLOSED,1)

    But I'm not sure what's wrong with it. I see the following error on the console:

        [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:459 ast_yyerror: ast_yyerror(): syntax error: syntax error, unexpected '=', expecting $end; Input:=TRUE^

    Surrounding console output:

        -- Executing [START@AGInbound:1] Answer("IAX2/AtlantaTeliax-10086", "") in new stack
        -- Executing [START@AGInbound:2] BackGround("IAX2/AtlantaTeliax-10086", 0000_AG_THANK_YOU_FOR_CALLING_AG") in new stack
        -- Playing '0000_AG_THANK_YOU_FOR_CALLING_AG.slin' (language 'en')
        [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:459 ast_yyerror: ast_yyerror(): syntax error: syntax error, unexpected '=', expecting $end; Input: =TRUE ^
        [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:463 ast_yyerror: If you have questions, please refer to doc/tex/channelvariables.tex in the asterisk source.
        -- Executing [START@AGInbound:3] GotoIf("IAX2/AtlantaTeliax-10086", "?CLOSED,1") in new stack
        -- Executing [START@AGInbound:4] GotoIfTime("IAX2/AtlantaTeliax-10086", "9:30-17:0|mon-fri|*|*?OPEN,1") in new stack
        -- Executing [START@AGInbound:5] GotoIfTime("IAX2/AtlantaTeliax-10086", "10:0-18:30|sat|*|*?OPEN,1") in new stack
        -- Executing [START@AGInbound:6] GotoIfTime("IAX2/AtlantaTeliax-10086", "12:0-17:0|sun|*|*?OPEN,1") in new stack

    Relevant lines from the dialplan:

        exten = START,1,Answer()
        exten = START,n,Background(0000_AG_THANK_YOU_FOR_CALLING_AG)
        ; See if we're open
        ; Force Closed if no one's going to be answering
        exten = START,n,GotoIf($[${FORCE_CLOSED}=TRUE]?CLOSED,1)
        exten = START,n,GotoIfTime(${AG_WEEKDAY_OPEN_HOUR}:${AG_WEEKDAY_OPEN_MIN}-${AG$
        exten = START,n,GotoIfTime(${AG_SATURDAY_OPEN_HOUR}:${AG_SATURDAY_OPEN_MIN}-${$
        exten = START,n,GotoIfTime(${AG_SUNDAY_OPEN_HOUR}:${AG_SUNDAY_OPEN_MIN}-${AG_S$
        ; ...and we're not. But maybe the time of day has been overridden?
        exten = START,n,GotoIf($[${OVERRIDE_TIME_OF_DAY}=TRUE]?OPEN,1)
        ; No override... We're definatly closed.
        exten = START,n,Goto(CLOSED,1)

    Any idea what's wrong with the expression? We recently upgraded from 1.4 to 1.6.
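
    Reading the warning literally helps: the parser reports its input as "=TRUE", meaning the left-hand side of the comparison expanded to nothing, which happens whenever ${FORCE_CLOSED} is unset for that call. The commonly recommended fix is to quote both sides of such comparisons (e.g. $["${FORCE_CLOSED}" = "TRUE"]) so an empty variable still yields a valid expression. A tiny illustration of the substitution, in plain Python just to show what the expression parser ends up seeing:

        force_closed = ''                         # an unset channel/global variable expands to ''
        print('$[%s=TRUE]' % force_closed)        # -> $[=TRUE]          (the syntax error in the log)
        print('$["%s" = "TRUE"]' % force_closed)  # -> $["" = "TRUE"]    (still a valid comparison)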

    Read the article

  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
    Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP. Cutting to the point, below are the things to look for, followed by some explanation about my scenario if you care to read. This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario.

    Requirements

    - Allow Remote Desktop connections in the guest OS.
    - The network adapter type must allow communication with the host machine (e.g. use an "Internal" virtual adapter).
    - If running Server 2008 R2 on the guest, network discovery mode must be turned on.
    - If running Server 2008 R2 on the guest, the services supporting network discovery mode must be running: DNS Client, Function Discovery Resource Publication, SSDP Discovery, UPnP Device Host.

    My Environment

    A quick word about my environment. I am running Windows Server 2008 R2 with Hyper-V on my laptop and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2. I run a domain controller VM and then 1 or 2 SharePoint servers depending on my work needs. I've found this setup to work well except when it comes to the display window for my VMs.

    The Issue

    Ever since I began running Hyper-V I haven't been able to RDP to my guest VMs, which means the resolution of my connection windows has been limited to what the native Hyper-V connections allow. During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600. That is, until today, when I decided to fully investigate why I couldn't connect via RDP.

    First, a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight. As it turns out I had not 1, not 2, but 3 items preventing me from using RDP. Let's dig into the requirements above.

    Allow RDP Connection

    This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections. Change the setting from "Don't allow…" to whichever "Allow connections…" setting suits your needs. I chose the less secure option as this is just my dev laptop.

    Network Adapter Type

    When I originally configured my VMs I gave each of them 2 network adapters: one using the physical ethernet adapter for internet use and a virtual private adapter for communication between the VMs. The connection for the ethernet adapter is an "External" adapter and thus doesn't connect between the host and guest. The virtual private adapter allowed communication ONLY between the VMs and not to my host. There is a third option, "Internal", which allows communication between VMs as well as to the host. After finding out this distinction I promptly created an Internal network adapter and assigned it to my VMs.

    Turn On Network Discovery

    Seems like a pretty common-sense thing, but in order to allow remote desktop connections the target computer must be able to be found by the source computer (explained here). One of the settings that controls whether a computer can be found on the network is aptly named Network Discovery. By default Windows Server 2008 R2 turns Network Discovery off for security purposes. To enable it, open up the Network and Sharing Center and click "Change Advanced Sharing Settings" on the left.

    On the following screen select "Turn on network discovery" for the currently used profile and click Save Settings. You may notice, though, that your selection to turn on network discovery doesn't save. If this is the case then you most likely don't have the supporting services running (as was my case).

    Network Discovery Supporting Services

    There are a total of 4 services (listed again below) that need to be running before you can turn on network discovery (explained here). The below images highlight these services. In my guest VM I found that I had DNS Client already running while the other 3 were disabled. I set them all to enabled and started the ones that were stopped (a scripted equivalent is sketched below, after this post). After this change I returned to the sharing settings screen and found that Network Discovery was turned on. I'm not sure whether this was picking up my earlier attempt to turn it on or whether starting those services turned it on. Either way the end result was a success.

    - DNS Client
    - Function Discovery Resource Publication
    - SSDP Discovery
    - UPnP Device Host

    Before and After Results

    The first image is the smaller, square-shaped viewing window used by the native Hyper-V connection. The second is the full-screen RDP connection in all its widescreen glory.

    Conclusion

    Over the past few months I've found Hyper-V to be very useful for virtualizing my development environments, but I've also had a steep learning curve to get various items configured just right. Allowing RDP connections to guest VMs was one area that I hadn't been able to get right for the longest time. Now that I've resolved these issues I hope that others can avoid the pitfalls that I ran into. If you know of any other items I left off, feel free to let me know.

    -Frog Out

    Links

    Turning on Network Discovery
    http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx

    Services required for Network Discovery
    http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84
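
    As referenced above, here is a hedged, scripted equivalent of enabling the four supporting services on the guest. The short service key names (Dnscache, FDResPub, SSDPSRV, upnphost) are the usual ones on Server 2008 R2 but are an assumption here; run from an elevated prompt, and the plain sc commands work just as well without Python.

        import subprocess

        # Service key names (assumed) for: DNS Client, Function Discovery Resource
        # Publication, SSDP Discovery, UPnP Device Host.
        services = ['Dnscache', 'FDResPub', 'SSDPSRV', 'upnphost']
        for svc in services:
            subprocess.run('sc config %s start= auto' % svc, shell=True, check=False)
            subprocess.run('sc start %s' % svc, shell=True, check=False)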

    Read the article

  • Getting TF215097 error after modifying a build process template in TFS Team Build 2010

    - by Jakob Ehn
    When embracing Team Build 2010, you typically want to define several different build process templates for different scenarios. Common examples are CI builds, QA builds and release builds. For example, in a continuous build you often have no interest in publishing to the symbol store, and you might or might not want to associate changesets and work items, etc. The build server is often heavily occupied as it is, so you don't want it doing more than necessary. Try to define a set of build process templates that are used across your company. In previous versions of TFS Team Build there was no easy way to do this, but in TFS 2010 it is very easy, so there is no excuse not to do it! :-)

    I ran into a scenario today where I had an existing build definition that was based on our release build process template. In this template, we have defined several different build process parameters that control the release build. These are placed into their own section in the Build Process Parameters editor. This is done using the ProcessParameterMetadataCollection element; I will explain how this works in a future post.

    I won't go into details on these parameters. The issue for this blog post is what happens when you modify a build process template so that it is no longer compatible with the build definition, i.e. a breaking change. In this case, I removed a parameter that was no longer necessary. After merging the new build process template to one of the projects and queueing a new release build, I got this error:

        TF215097: An error occurred while initializing a build for build definition <Build Definition Name>: The values provided for the root activity's arguments did not satisfy the root activity's requirements: 'DynamicActivity': The following keys from the input dictionary do not map to arguments and must be removed: <Parameter Name>. Please note that argument names are case sensitive. Parameter name: rootArgumentValues

    <Parameter Name> was the parameter that I removed, so it was pretty easy to understand why the error had occurred. However, it is not entirely obvious how to fix the problem. When I open the build definition everything looks OK: the removed build process parameter is not there, and I can open the build process template without any validation warnings.

    The problem here is that all settings specific to a particular build definition are stored in the TFS database. In TFS 2005, everything related to a build was stored in TFS source control in files (TFSBuild.proj, WorkspaceMapping.xml, ...). In TFS 2008, many of these settings were moved into the database; still, lots of things were stored in TFSBuild.proj, such as the solution and configuration to build, and whether to execute tests or not. In TFS 2010, all settings for a build definition are stored in the database. If we look inside the database we can see what this looks like. The table tbl_BuildDefinition contains all information for a build definition. One of the columns is called ProcessParameters and contains a serialized representation of a Dictionary, which is the underlying object where these settings are stored.

    Here is an example:

        <Dictionary x:TypeArguments="x:String, x:Object" xmlns="clr-namespace:System.Collections.Generic;assembly=mscorlib" xmlns:mtbwa="clr-namespace:Microsoft.TeamFoundation.Build.Workflow.Activities;assembly=Microsoft.TeamFoundation.Build.Workflow" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <mtbwa:BuildSettings x:Key="BuildSettings" ProjectsToBuild="$/PathToProject.sln">
            <mtbwa:BuildSettings.PlatformConfigurations>
              <mtbwa:PlatformConfigurationList Capacity="4">
                <mtbwa:PlatformConfiguration Configuration="Release" Platform="Any CPU" />
              </mtbwa:PlatformConfigurationList>
            </mtbwa:BuildSettings.PlatformConfigurations>
          </mtbwa:BuildSettings>
          <mtbwa:AgentSettings x:Key="AgentSettings" Tags="Agent1" />
          <x:Boolean x:Key="DisableTests">True</x:Boolean>
          <x:String x:Key="ReleaseRepositorySolution">ERP</x:String>
          <x:Int32 x:Key="Major">2</x:Int32>
          <x:Int32 x:Key="Minor">3</x:Int32>
        </Dictionary>

    Here we can see that only the non-default values are persisted into the database. So, the problem in my case was that I had removed one of the parameters from the build process template, but the parameter and its value still existed in the build definition database. The solution is to refresh the build definition and save it. In the Process tab, there is a Refresh button that will reload the build definition and the process template and synchronize them.

    After refreshing the build definition and saving it, the build ran successfully again.

    Read the article

  • jenkins-maven-android when running throwing the error "android-sdk-linux/platforms" is not a directory"

    - by Sam
    I started setting up jenkins-maven-android and I'm facing an issue when running the Jenkins job.

    My machine details:

        $ uname -a
        Linux development2 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

    Steps I followed:

    1. Installed the Android SDK in Ubuntu as described at https://help.ubuntu.com/community/AndroidSDK
    2. Since I'm working in a headless environment (SSH to the client machine), I used the following command to install the platform tools: android update sdk --no-ui
    3. Downloaded Apache Maven from http://maven.apache.org/download.html and installed it. Output of mvn -version:

        root@development2:/opt/android-sdk-linux/tools# mvn -version
        Apache Maven 3.0.4 (r1232337; 2012-01-17 08:44:56+0000)
        Maven home: /opt/apache-maven-3.0.4
        Java version: 1.6.0_24, vendor: Sun Microsystems Inc.
        Java home: /usr/lib/jvm/java-6-openjdk/jre
        Default locale: en_US, platform encoding: UTF-8
        OS name: "linux", version: "3.0.0-12-virtual", arch: "amd64", family: "unix"
        root@development2:/opt/android-sdk-linux/tools#

    4. Ran the following two commands, as mentioned in "Problems with Eclipse and Android SDK" and http://developer.android.com/sdk/installing/index.html:

        sudo apt-get update
        sudo apt-get install ia32-libs

    As the error suggests, I gave the path to the Android SDK in the Jenkins build config, but I'm still getting the error:

        clean install -Dandroid.sdk.path=/opt/android-sdk-linux

    Can someone help me to resolve this? Thanks.

    The error I'm getting:

        Waiting for Jenkins to finish collecting data
        mavenExecutionResult exceptions not empty
        message : Failed to execute goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources (default-generate-sources) on project base-template: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME.
        cause : Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME.
        Stack trace :
        org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources (default-generate-sources) on project base-template: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME.
            at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:225)
            at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
            at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
            at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
            at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
            at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
            at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
            at org.jvnet.hudson.maven3.launcher.Maven3Launcher.main(Maven3Launcher.java:79)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:616)
            at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:329)
            at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:239)
            at org.jvnet.hudson.maven3.agent.Maven3Main.launch(Maven3Main.java:158)
            at hudson.maven.Maven3Builder.call(Maven3Builder.java:98)
            at hudson.maven.Maven3Builder.call(Maven3Builder.java:64)
            at hudson.remoting.UserRequest.perform(UserRequest.java:118)
            at hudson.remoting.UserRequest.perform(UserRequest.java:48)
            at hudson.remoting.Request$2.run(Request.java:326)
            at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
            at java.util.concurrent.FutureTask.run(FutureTask.java:166)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
            at java.lang.Thread.run(Thread.java:679)
        Caused by: org.apache.maven.plugin.PluginExecutionException: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME.
            at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:110)
            at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
            ... 27 more
        Caused by: com.jayway.maven.plugins.android.InvalidSdkException: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME.
            at com.jayway.maven.plugins.android.AndroidSdk.assertPathIsDirectory(AndroidSdk.java:125)
            at com.jayway.maven.plugins.android.AndroidSdk.getPlatformDirectories(AndroidSdk.java:285)
            at com.jayway.maven.plugins.android.AndroidSdk.findAvailablePlatforms(AndroidSdk.java:260)
            at com.jayway.maven.plugins.android.AndroidSdk.<init>(AndroidSdk.java:80)
            at com.jayway.maven.plugins.android.AbstractAndroidMojo.getAndroidSdk(AbstractAndroidMojo.java:844)
            at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.generateR(GenerateSourcesMojo.java:329)
            at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.execute(GenerateSourcesMojo.java:102)
            at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
            ... 28 more
        channel stopped
        Finished: FAILURE

    ANDROID_HOME echo:

        root@development2:~# echo $ANDROID_HOME
        /opt/android-sdk-linux
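
    A hedged note on what the stack trace says: the plugin only checks that /opt/android-sdk-linux/platforms exists and is readable, and that check runs as the Jenkins user, not as root. Two likely culprits are that "android update sdk --no-ui" installed only platform-tools (no actual platform, so platforms/ is empty or absent) or that the jenkins user lacks permissions under /opt/android-sdk-linux. A quick check of what the build user actually sees, runnable as that user:

        import os

        # Mirror the plugin's check: ANDROID_HOME (or the -Dandroid.sdk.path value)
        # must contain a readable platforms/ directory with at least one platform.
        sdk = os.environ.get('ANDROID_HOME', '/opt/android-sdk-linux')
        platforms = os.path.join(sdk, 'platforms')
        print(platforms, 'is a directory:', os.path.isdir(platforms))
        if os.path.isdir(platforms):
            print('installed platforms:', os.listdir(platforms))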

    Read the article

  • Ask How-To Geek: Dropbox in the Start Menu, Understanding Symlinks, and Ripping TV Series DVDs

    - by Jason Fitzpatrick
    This week we take a look at how to incorporate Dropbox into your Windows Start Menu, understanding and using symbolic links, and how to rip your TV series DVDs right to unique and high-quality episode files. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week's reader dilemmas.

    Add Dropbox to Your Start Menu

    Dear How-To Geek, I use Dropbox all the time and would like to add it right onto my Start Menu alongside the other major shortcuts like Documents, Pictures, etc. It seems like adding Dropbox to the menu should be part of the Dropbox installation package! Sincerely, Dropboxing in Des Moines

    Dear Dropboxing, We agree, it would be a nice installation option. As it stands, you're going to have to do a little simple hacking to get Dropbox nestled neatly into your Start Menu. The hack isn't super elegant, but when you're done you'll have the link you want and it'll look like it was there all along. Check out this step-by-step guide here in order to take an existing Library shortcut and rework it to be a Dropbox link.

    Understanding and Using Symbolic Links

    Dear How-To Geek, I was talking to a coworker the other day about an issue I'd been having with a media center application I'm running. He suggested using symbolic links to better organize my media and make it easier for the application to access my collection. I had no idea what he was talking about and never got a chance to bug him about it later. Can you clear up this whole symbolic links business for me? I've been using computers for years and I've never even heard of it! Sincerely, Symbolic Who?

    Dear Symbolic, Symbolic links aren't commonly used by many Windows users, which is why you likely haven't run into the concept. Symbolic links are essentially supercharged shortcuts - the newly introduced Windows library system is really just a type of symbolic link system. You can use symbolic links to do all sorts of neat stuff like link folders to your Dropbox folder, organize media, and more (a small example follows this column). The concept of symbolic links is pretty simple, but the execution can be really tricky. We'd suggest reading over our guide to creating symbolic links in Windows 7, Windows XP, and Ubuntu to get a clearer idea of what you're getting into.

    Rip Your TV DVDs into Handy Episode Files

    Dear How-To Geek, My wife got me an iPod for Christmas and I still haven't got around to filling it up. I have tons of entire TV show seasons on DVD and would like to get them on the iPod, but I have absolutely no idea where to start. How do I get the shows off the discs? I thought it would be as easy to import the TV shows into iTunes as it is to import tracks off a CD, but I was totally wrong. I tried downloading some applications to rip them, but those didn't work at all. Very frustrating! Surely there is an easy and/or automated way to do this, right? Sincerely, Free My DVDs

    Dear DVDs, Oh man is this a frustration we can relate to. It's inordinately difficult to get movies and TV shows off physical media and into digital (and portable-media-player-friendly) formats. There are a multitude of ways to rip DVDs and quite a few applications out there (some good, some mediocre, and some outright malware). We'd recommend a two-part punch to solve your ripping woes. You'll need a copy of DVDFab to strip away the protections on the discs and Handbrake to load the disc image and convert the files. It's not quite as smooth as the CD-to-iTunes workflow, but it's still pretty easy. Check out all the steps and settings you'll want to toggle here.

    Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.
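
    As a follow-up to the symbolic-links answer above, here is a hedged illustration of creating one. On Windows 7 the usual tool is the built-in mklink command (mklink /D link target from an elevated prompt); the Python equivalent below does the same thing, with purely hypothetical paths:

        import os

        # Create a directory symlink so a media app sees C:\Media\Music while the
        # files actually live inside Dropbox. Needs an elevated prompt on Windows 7.
        target = r'C:\Users\me\Dropbox\Music'   # hypothetical real location
        link = r'C:\Media\Music'                # hypothetical link location
        os.symlink(target, link, target_is_directory=True)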

    Read the article

  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. There are a number of security related enhancements, requested by customers and arising from internal reviews, that we have introduced. Here is a summary of some of the security enhancements we have added in this release:

- Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes (or so). This has been made more nimble by supporting a cache refresh every minute (or so). This means authorization changes are reflected quicker than before.
- Business Level Security - Business Services are configurable services that are based upon Application Services. Typically, the business service inherited its security profile from its parent service. Whilst this is sufficient for most needs, it is now possible to further specify security on the Business Service definition itself. This allows granular security and allows the same application service to be exposed as different Business Services, each with its own security. This is particularly useful when you base a Business Service on a query zone.
- User Propagation - As with other client server applications, the database connections are pooled and shared as needed. This means that a common database user is used to access the database from the pool to allow sharing. Unfortunately, this means that traceability at the database level is that much harder. In Oracle Utilities Application Framework V4 the end userid is now propagated to the database using the CLIENT_IDENTIFIER as part of the Oracle JDBC connection API. This means that the common database userid is still used but the end user is identifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products. This enhancement is only available to Oracle Database customers (see the sketch at the end of this summary).
- Enhanced Security Definitions - Security Administrators use the product browser front end to control access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information.
- Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the Framework now provides an integration adapter and Identity Manager Generic Transport Connector (GTC) to allow users and group membership to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilities Application Framework V2.2 customers. Refer to My Oracle Support KB Id 970785.1 - Oracle Identity Manager Integration Overview.
- Audit On Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records. In Oracle Utilities Application Framework the Business Services and Service Scripts could be configured to audit inquiries as well. Now it is possible to attach auditing capabilities to zones on the product (including base package ones).
- Time Zone Support - In some of the Oracle Utilities Application Framework based products, the timezone of the end user is a factor in the processing. The user object has been extended to allow the recording of time zone information for use in product functionality.
- JAAS Support - Internally the Oracle Utilities Application Framework uses a number of techniques to validate and transmit security information across the architecture. These various methods have been reconciled into using Java Authentication and Authorization Services for standardized security. This is strictly an internal change with no direct impact on how security operates externally.
- JMX Based Cache Management - In the last bullet point, I mentioned extra security applied to cache management from the browser. Alternatively, a JMX based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR120 compliant JMX console or JMX browser. I will be writing another more detailed blog entry on the JMX enhancements as it is quite a change and an exciting direction for the product line.
- Data Patch Permissions - The database installer provided with the product required lower levels of security for some operations. Some sites wanted the ability for non-DBAs to execute the utilities in a controlled fashion. The framework now allows feature configuration to allow delegation for patch execution.
- User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace. In this scenario, temporary security setups were required and used. A potential issue arises when the contractor leaves the company. Typically the IT group would remove the contractor from the security repository to prevent login using that contractor's userid, but the userid could NOT be removed from the authorization model because of audit requirements (if any user in the product updates financials or key data their userid is recorded for audit purposes). It is now possible to effectively disable the user in the security model to prevent any use of the userid whilst retaining audit information.

These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product are contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.
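The user propagation described above relies on the standard Oracle JDBC client identifier mechanism. As a rough sketch of the underlying idea (not the framework's actual code), a pooled connection can be tagged with the end user before a call and cleared afterwards; the "OCSID.CLIENTID" client-info key and the helper class are assumptions for illustration only.

    import java.sql.Connection;
    import java.sql.SQLException;

    public class EndUserTagging {

        // Tag the pooled connection with the real end user before making a call,
        // so the database sees CLIENT_IDENTIFIER = end user rather than the pool user.
        public static void tag(Connection pooledConnection, String endUserId) throws SQLException {
            // "OCSID.CLIENTID" is the Oracle JDBC client-info key that maps to
            // CLIENT_IDENTIFIER; treat the exact key name as an assumption here.
            pooledConnection.setClientInfo("OCSID.CLIENTID", endUserId);
        }

        // Clear the identifier before the connection goes back to the pool.
        public static void clear(Connection pooledConnection) throws SQLException {
            pooledConnection.setClientInfo("OCSID.CLIENTID", null);
        }
    }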

    Read the article

  • Difference between LASTDATE and MAX for semi-additive measures in #DAX

    - by Marco Russo (SQLBI)
I recently wrote an article on SQLBI about the semi-additive measures in DAX. I included the formulas for common calculations, and there is an interesting point that is worth a longer digression: the difference between LASTDATE and MAX (which is similar to FIRSTDATE and MIN – I just describe the former; for the latter just replace the corresponding names). LASTDATE is a DAX function that receives an argument that has to be a date column and returns the last date active in the current filter context. Apparently, it is the same value returned by MAX, which returns the maximum value of the argument in the current filter context. Of course, MAX can receive any numeric type (including date), whereas LASTDATE only accepts a column of type date. But overall, they seem identical in the result. However, the difference is a semantic one. In fact, this expression:

LASTDATE ( 'Date'[Date] )

could also be rewritten as:

FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) )

LASTDATE is a function that returns a table with a single column and one row, whereas MAX returns a scalar value. In DAX, any expression with one row and one column can be automatically converted into the corresponding scalar value of the single cell returned. The opposite is not true. So you can use LASTDATE in any expression where a table or a scalar is required, but MAX can be used only where a scalar expression is expected. Since LASTDATE returns a table, you can use it in any expression that expects a table as an argument, such as COUNTROWS. In fact, you can write this expression:

COUNTROWS ( LASTDATE ( 'Date'[Date] ) )

which will always return 1 or BLANK (if there are no dates active in the current filter context). You cannot pass MAX as an argument of COUNTROWS. You can pass to LASTDATE a reference to a column or any table expression that returns a column. The following two syntaxes are semantically identical:

LASTDATE ( 'Date'[Date] )
LASTDATE ( VALUES ( 'Date'[Date] ) )

The result is the same, and the use of VALUES is not required because it is implicit in the first syntax, unless you have a row context active. In that case, be careful: using the LASTDATE function with a direct column reference in a row context will produce a context transition (the row context is transformed into a filter context) that hides the external filter context, whereas using VALUES in the argument preserves the existing filter context without applying the context transition of the row context (see the columns LastDate and Values in the following query and result). You can use any other table expression (including a FILTER) as the LASTDATE argument. For example, the following expression will always return the last date available in the Date table, regardless of the current filter context:

LASTDATE ( ALL ( 'Date'[Date] ) )

The following query recaps the results produced by the different syntaxes described.

EVALUATE
CALCULATETABLE (
    ADDCOLUMNS (
        VALUES ( 'Date'[Date] ),
        "LastDate", LASTDATE ( 'Date'[Date] ),
        "Values", LASTDATE ( VALUES ( 'Date'[Date] ) ),
        "Filter", LASTDATE ( FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) ) ),
        "All", LASTDATE ( ALL ( 'Date'[Date] ) ),
        "Max", MAX ( 'Date'[Date] )
    ),
    'Date'[Calendar Year] = 2008
)
ORDER BY 'Date'[Date]

The LastDate column repeats the current date, because the context transition happens within the ADDCOLUMNS. The Values column preserves the existing filter context from being replaced by the context transition, so the result corresponds to the last day in year 2008 (which is filtered in the external CALCULATETABLE). The Filter column works like the Values one, even if we use the FILTER instead of the LASTDATE approach. The All column shows the result of LASTDATE ( ALL ( 'Date'[Date] ) ), which ignores the filter on Calendar Year (in fact the date returned is in year 2010). Finally, the Max column shows the result of the MAX formula, which is the easiest to use but does not return a table when you need one (such as in a filter argument of CALCULATE or CALCULATETABLE, where using LASTDATE is shorter). I know that using LASTDATE in complex expressions might create some issues. In my experience, the fact that a context transition happens automatically in the presence of a row context is the main cause of confusion and unexpected results in DAX formulas using this function. For a reference of DAX formulas using MAX and LASTDATE, read my article about semi-additive measures in DAX.
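To tie this back to the semi-additive measures the article started from, here is a minimal sketch of the two equivalent patterns; the Balances[Balance] column is a placeholder, not a table from a specific model.

    LastBalanceWithLastDate :=
    CALCULATE (
        SUM ( Balances[Balance] ),
        LASTDATE ( 'Date'[Date] )        -- table expression, valid as a filter argument
    )

    LastBalanceWithMax :=
    CALCULATE (
        SUM ( Balances[Balance] ),
        FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) )   -- MAX is scalar, so it needs FILTER
    )

The first form works because LASTDATE returns a table and can be used directly as a filter argument; MAX returns a scalar, so it requires the FILTER wrapper shown in the second form.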

    Read the article

  • Building a database installer with WiX, datadude and Visual Studio 2010

    - by jamiet
Today I have been using Windows Installer XML (WiX) to build an installer (.msi file) that would install a SQL Server database on a server of my choosing; the source code for that database lives in datadude (a tool which you may know by one of quite a few other names). The basis for this work was a most excellent blog post by Duke Kamstra entitled Implementing a WIX installer that calls the GDR version of VSDBCMD.EXE, which covers the delicate intricacies of doing this – particularly how to call Vsdbcmd.exe in a CustomAction. Unfortunately there are a couple of things wrong with Duke's post:

- Searching for "datadude wix" didn't turn it up in the first page of search results and hence it took me a long time to find it. And I knew that it existed. If someone else were after a post on using WiX with datadude it's likely that they would never have come across Duke's post, and that would be a great shame because it's the definitive post on the matter.
- It was written in October 2009 and had not been updated for Visual Studio 2010.

Well, this blog post is an attempt to solve those problems. Hopefully I've solved the first one just by following a few of my blogging SEO tips while writing this blog post; in the rest of it I will explain how I took Duke's code and updated it to work in Visual Studio 2010. If you need to build a database installer using WiX, datadude and Visual Studio 2010 then you still need to follow Duke's blog post, so go and do that now. Below are the amendments that I made that enabled the project to get built in Visual Studio 2010:

- In VS2010 datadude's output files have changed from being called Database.<suffix> to <ProjectName>_Database.<suffix>. Duke's code was referencing the old file name formats.
- Duke used $(var.SolutionDir) and relative paths to point to datadude artefacts; I have replaced these with Votive Project References http://wix.sourceforge.net/manual-wix3/votive_project_references.htm
- I commented out all references to MicrosoftSqlTypesDbschema in DatabaseArtifacts.wxi. I don't think this is produced in VS2010 (I may be wrong about that, but it wasn't in the output from my project).
- Similarly, I commented out the MicrosoftSqlTypesDbschema component in VsdbcmdArtifacts.wxi. It wasn't where Duke's code said it should have been, so I am assuming/hoping it isn't needed.
- Duke's ?define block to work out the appropriate SrcArchPath actually wasn't working for me (i.e. <?if $(var.Platform)=x64 ?> was evaluating to false), so I just took out the conditional stuff and declared the path explicitly to the "Program Files (x86)" path. The old code is still there though if you need to put it back.
- None of the <RegistrySearch> stuff is needed for VS2010 - so I commented it all out!
- Changed to use the /manifest option rather than the /model option on the vsdbcmd.exe command-line. Personal preference is all!
- Added a new component in order to bundle along the vsdbcmd.exe.config file.
- Made the install of the Custom Action dependent on the relevant feature being selected for install. This one is actually really important – deselecting the database feature for installation does not, by default, stop the CustomAction from executing and so would cause an error, so that scenario needs to be catered for (a rough sketch of the wiring is shown at the end of this post).

I have made my amended solution available for download at: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.zip It contains two projects: the WiX project and the datadude project that is the source to be deployed (for demo purposes it only contains one table). I have also made the .msi available although, in order that it gets through file blockers, I changed the name from InstallMyDatabase.msi to InstallMyDatabase.ms_ – simply rename the file back once you have downloaded it from: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.ms%5E_ . You can try it out for yourself – the only thing it does is dump the files into %Program Files%\MyDatabase and use them to install a database onto a server of your choosing, with a name of your choosing - no damaging side-effects. I will caveat this by saying "it works on my machine" and, not having access to a plethora of different machines, I haven't tested it anywhere else. One potential issue that I know of is that Vsdbcmd.exe has a dependency on SQL Server CE, although if you have SQL Server tools or Visual Studio installed you should be fine. Unfortunately it's not possible to bundle the SQL Server CE installer along in the .msi because Windows will not allow you to call one installer from inside another – the recommended way to get around this problem is to build a bootstrapper to bundle the whole lot together, but doing that is outside the scope of this blog post. If you discover any other issues then please let me know. Here are the screenshots from the installer: And once installed…. Hope this is useful! @jamiet
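For orientation, the CustomAction wiring referred to in the amendments list looks roughly like this. The Id, FileKey, directory and property names are placeholders rather than the names used in the sample solution, and the exact vsdbcmd.exe switches should be checked against your own installation; treat this as a sketch, not a drop-in fragment.

    <CustomAction Id="DeployDatabase"
                  FileKey="VsdbcmdExe"
                  ExeCommand="/a:Deploy /dd:+ /manifest:&quot;[INSTALLLOCATION]MyDatabase.deploymanifest&quot; /cs:&quot;Data Source=[DATABASESERVER];Integrated Security=True&quot; /p:TargetDatabase=[DATABASENAME]"
                  Execute="deferred"
                  Impersonate="no"
                  Return="check" />

    <InstallExecuteSequence>
      <!-- Only run the deployment if the database feature is actually being installed -->
      <Custom Action="DeployDatabase" After="InstallFiles"><![CDATA[&DatabaseFeature=3]]></Custom>
    </InstallExecuteSequence>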

    Read the article

  • Session Report: What’s New in JSF: A Complete Tour of JSF 2.2

    - by Janice J. Heiss
On Wednesday, Ed Burns, Consulting Staff Member at Oracle, presented a session, CON3870 -- "What's New in JSF: A Complete Tour of JSF 2.2," in which he provided an update on recent developments in JavaServer Faces 2.2. He began by emphasizing that, "JavaServer Faces 2.2 continues the evolution of the Java EE standard user interface technology. Like previous releases, this iteration is very community-driven and transparent." He pointed out that since JSF was introduced at the 2001 JavaOne Keynote, it has had a long and successful run and has found a home in applications where the model and UI logic reside entirely on the server. In such cases, the browser performs fairly simple functions. However, developers can take advantage of the power of browsers, something that Project Avatar is focused on by letting developers author their applications so the UI logic runs on the client and communicates with the back end via RESTful web services. "Most importantly," remarked Burns, "JSF 2.2 offers a really good migration path because even in the scope of one application you could have an app written with JSF that has its UI logic on the server and, on a gradual basis, you could migrate parts of the app over to use client-side technologies. This can be done at any level of granularity – per page or per collection of pages. It all depends on what you want to do." His presentation, which focused on the basic new features of JSF 2.2, began by restating the scope of JSF, and he encouraged attendees to check out Roger Kitain's session: CON5133 "Techniques for Responsive Real-Time Web UIs." Burns explained that JSF has endured because, "We still need web apps that are maintainable, localizable, quick to build, accessible, secure, look great and are fun to use." It is used on every continent – the curious can go here to check out where its unofficial usage is tracked. He emphasized the significance of the UI logic being substantially on the server. This:

- Separates Component Semantics from Rendering,
- Allows components to "own" their little patch of the UI -- encode/decode,
- And offers a well-defined lifecycle: Inversion of Control.

Burns reminded attendees that JSR-344, the spec for JSF 2.2, is now on Java Community Process 2.8, a revised version of the JCP that allows for more openness and transparency. He then offered some tools for community access to JSF 2.2:

- Public java.net projects: spec http://jsf-spec.java.net/, impl http://jsf.java.net/, Open Source: GPL+Classpath Exception
- Mailing Lists: [email protected] (public readable archive, JSPA signed member read/write) and [email protected] (public readable archive, any java.net member read/write). All mail sent to jsr344-experts is sent to users.
- Issue Tracker: spec http://jsf-spec.java.net/issues/, impl http://jsf.java.net/issues/

JSF 2.2, which is JSR 344, has a Public Review Draft planned by December 2012 with no need for a Renewal Ballot. The Early Draft Review of JSR 344 was published on December 8, 2011. Interested developers are encouraged to offer their input.

Six Big Ticket Features of JSF 2.2

Burns summarized the six big ticket features of JSF 2.2:

- HTML5 Friendly Markup Support (pass-through attributes and elements)
- Faces Flows
- Cross Site Request Forgery Protection
- Loading Facelets via ResourceHandler
- File Upload Component
- Multi-Templating

He explained that he called it "HTML 5 friendly" because there is really nothing HTML 5 specific about it -- it could be 4. But it enables developers to use new elements that are present in HTML5 without having a JSF component library that is written to take advantage of those specifically. It gives the page author the ability to use plain HTML5 to write their page, but still take advantage of the server-side processing available in JSF. He presented a demo showing JSF 2.2's ability to leverage the expressiveness of HTML5 (a small markup sketch follows at the end of this summary). Burns then explained the significance of Faces Flows, which offer function points and quantify how much work has taken place, something of great value to JSF users. He went on to talk about JSF 2.2's cross-site request forgery (CSRF) protection and offered details about how it protects applications against attack. Then he talked about JSF 2.2's File Upload Component and explained that the final specification will have Ajax and non-Ajax support; the current milestone has non-Ajax support implemented. He then went on to explain its capacity to load Facelets through ResourceHandler. Previously, JSF 2.0 added Facelets and ResourceHandler as disparate units; now in JSF 2.2 the two concepts are unified. Finally, he explained the concept of multi-templating in JSF 2.2 and went on to discuss more medium-level features of the release. For an easy, low maintenance way of staying in touch with JSF developments, go to JSF's Twitter page where, every month or so, important updates are offered.
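As a small illustration of the HTML5 friendly markup idea (a generic sketch, not one of the demos from the session), pass-through attributes let a page author keep plain HTML5 attributes on a JSF component; the userBean below is a hypothetical backing bean.

    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:h="http://xmlns.jcp.org/jsf/html"
          xmlns:p="http://xmlns.jcp.org/jsf/passthrough">
      <h:form>
        <!-- p:type and p:placeholder are passed straight through to the rendered HTML5 input -->
        <h:inputText value="#{userBean.email}" p:type="email" p:placeholder="you@example.com" />
      </h:form>
    </html>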

    Read the article

  • SQL SERVER – Disabled Index and Update Statistics

    - by pinaldave
Have you ever come across a situation where a conversation never gets over and it continues even though the original point of discussion has passed? I am facing the same situation in the case of the Disabled Index. Here are the links to the original conversations:

- SQL SERVER – Disable Clustered Index and Data Insert – a reader had an issue here with a disabled index
- SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index – the reader asked about the effect of rebuilding indexes

The same reader asked me today: "I understood what the disabled indexes do; what is their effect on statistics? Is it true that even though indexes are disabled, they continue updating the statistics?" The answer is very interesting:

- If the clustered index is disabled, you will not be able to update the statistics at all for any index on the table.
- If the clustered index is enabled and a nonclustered index is disabled, updating the statistics of the table automatically updates the statistics of ALL the indexes on the table (both disabled and enabled).

If you are not satisfied with the answer, let us go over a simple example. I have written the necessary comments in the code itself to make the idea clear.

USE tempdb
GO
-- Drop Table if Exists
IF EXISTS (SELECT * FROM sys.objects
WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U'))
DROP TABLE [dbo].[TableName]
GO
-- Create Table
CREATE TABLE [dbo].[TableName](
[ID] [int] NOT NULL,
[FirstCol] [varchar](50) NULL
)
GO
-- Insert Some data
INSERT INTO TableName
SELECT 1, 'First'
UNION ALL
SELECT 2, 'Second'
UNION ALL
SELECT 3, 'Third'
UNION ALL
SELECT 4, 'Fourth'
UNION ALL
SELECT 5, 'Five'
GO
-- Create Clustered Index
ALTER TABLE [TableName]
ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
GO
-- Create Nonclustered Index
CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName]
ON [dbo].[TableName] ([FirstCol] ASC)
GO
-- Check that all the indexes are enabled
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO

Now let us update the statistics of the table and check the statistics update date.

-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO

Now let us disable the indexes and check that they are disabled using sys.indexes.

-- Disable Indexes
-- Disable Nonclustered Index
ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
GO
-- Disable Clustered Index
ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
GO
-- Check that all the indexes are disabled
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO

Let us try to update the statistics of the table.

-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
/*
-- Above operation should throw the following error
Msg 1974, Level 16, State 1, Line 1
Cannot perform the specified operation on table 'TableName' because its clustered index 'PK_TableName' is disabled.
*/

When we try to update the statistics, it throws an error because the clustered index is disabled. Now let us enable the clustered index only and attempt to update the statistics of the table right after that.

-- Now let us rebuild clustered index only
ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
GO
-- Check the status of all the indexes
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO
-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO

We can clearly see that even though the nonclustered index is disabled, its statistics are also updated. If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system. This is because every time the statistics are updated for the table, the statistics for the disabled indexes are also updated.

-- Clean up
DROP TABLE [TableName]
GO

The complete script is given below for easy reference.

USE tempdb
GO
-- Drop Table if Exists
IF EXISTS (SELECT * FROM sys.objects
WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U'))
DROP TABLE [dbo].[TableName]
GO
-- Create Table
CREATE TABLE [dbo].[TableName](
[ID] [int] NOT NULL,
[FirstCol] [varchar](50) NULL
)
GO
-- Insert Some data
INSERT INTO TableName
SELECT 1, 'First'
UNION ALL
SELECT 2, 'Second'
UNION ALL
SELECT 3, 'Third'
UNION ALL
SELECT 4, 'Fourth'
UNION ALL
SELECT 5, 'Five'
GO
-- Create Clustered Index
ALTER TABLE [TableName]
ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
GO
-- Create Nonclustered Index
CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName]
ON [dbo].[TableName] ([FirstCol] ASC)
GO
-- Check that all the indexes are enabled
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO
-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO
-- Disable Indexes
-- Disable Nonclustered Index
ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
GO
-- Disable Clustered Index
ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
GO
-- Check that all the indexes are disabled
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO
-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
/*
-- Above operation should throw the following error
Msg 1974, Level 16, State 1, Line 1
Cannot perform the specified operation on table 'TableName' because its clustered index 'PK_TableName' is disabled.
*/
-- Now let us rebuild clustered index only
ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
GO
-- Check the status of all the indexes
SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO
-- Update the stats of table
UPDATE STATISTICS TableName WITH FULLSCAN
GO
-- Check Statistics Last Updated Datetime
SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('TableName')
GO
-- Clean up
DROP TABLE [TableName]
GO

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics

    Read the article

  • Seriously, It’s Time to Get Your Content Act Together

    - by Mike Stiles
    Branded content, content marketing, social content, brand journalism, we’re seeing those terms more and more. Why? The technology tools are coming together. We should know. We can gather big data, crunch it, listen to the public, moderate, respond, get to know the customer intimately, know what they like, know what they want, we can target, distribute, amplify, measure engagement and reaction, modify strategy and even automate a great deal of all that. An amazing machine, a sleek, smooth-running engine has been built such that all the parts can interact and work together to deliver peak performance and maximum output. But that engine isn’t going anywhere without any gas. Content is the gas. Yes, we curate other people’s content. We can siphon their gas. There’s tech to help with that too. But as for the creation of original, worthwhile content made for a specific audience, our audience, machines can’t do that…at least not yet. Curated content is great. But somebody has to originate the content for it to be curated and shared. And since the need for good, curated content is obviously large and the desire to share is there, it’s a winning proposition for a brand to be a consistent producer of original content. And yet, it feels like content is an issue we’re avoiding. There’s a reluctance to build a massive pipeline if you have no idea what you’re going to run through it. The C-suite often doesn’t know what content is, that it’s different from ads, where to get it, who makes it, how long it should be, what the point of it is if there’s no hard sell of the product, what it costs, how to use it, how to measure it, how to make sure it’s good, or how to make sure it will keep flowing. It could be the reason many brands aren’t pulling the trigger on socially enabling the enterprise. And that’s a shame, because there are a lot of creative, daring, experimental, uniquely talented entertainers and journalists chomping at the bit to execute content for brands. But for many corporate executives, content is “weird,” and the people who make it are even weirder. The content side of the equation is human. It’s art, but art that can be informed by data. The natural inclination is for brands to turn to their agencies for such creative endeavors. But agencies are falling into one of two categories. They’re failing to transition from ads to content. In “Content Era, What’s the Role of Agencies?” Alexander Jutkowitz says agencies were made for one-hit campaigns, not ongoing content. Or, they’re ready and capable but can’t get clients to do the right things. Agencies have to make money, even if it means continuing to do the wrong things because that’s all the client will agree to. So what we wind up with in the pipeline is advertising, marketing-heavy content, content that was obviously created or spearheaded by non-creative executives, random & inconsistent content, copy written for SEO bots, and other completely uninteresting nightmares. Frank Rose, author of “The Art of Immersion,” writes, “Content without story and excitement is noise pollution.” In the old days, you made an ad and inserted it into shows made by people who knew what they were doing. You could bask in that show’s success and leverage their audience. Now, you are tasked with attracting, amassing and holding your own audience. You may just want to make, advertise and sell your widgets. But now there’s a war on for a precious commodity, attention. People are busy. They have filters to keep uninteresting and irrelevant things out. 
They value their time and expect value back when they give it up. Joe Pulizzi, founder of the Content Marketing Institute, says, "Your customers don't care about you, your products, your services…they care about themselves, their wants and their needs." Is it worth getting serious about content and doing it right? 61% of consumers feel better about a company that delivers custom content (Custom Content Council). Interesting content is one of the top 3 reasons people follow brands on social (Content+). 78% of consumers think organizations that provide custom content want to build good relationships with them (TMG Custom Media). On the B2B side, 80% of business decision makers prefer to get company info in a series of articles vs. an ad. So what's the hang-up? Cited barriers to content marketing are lack of human resources (42%) and lack of budget (35%). 54% of brands don't have a single on-site, dedicated content creator. And only 38% of brands have a content marketing strategy. Tech has built the biggest, most incredible stage for brands that's ever been built. Putting something on that stage is your responsibility. Do a bad show, or no show at all, and you'll be the beautiful, talented actress that never got discovered. @mikestiles Photo: Gabriella Fabbri, stock.xchng

    Read the article

  • Using C# 4.0’s DynamicObject as a Stored Procedure Wrapper

    - by EltonStoneman
[Source: http://geekswithblogs.net/EltonStoneman]

Overview

Ignoring the fashion, I still make a lot of use of DALs – typically when inheriting a codebase with an established database schema which is full of tried and trusted stored procedures. In the DAL a collection of base classes has all the scaffolding, so the usual pattern is to create a wrapper class for each stored procedure, giving typesafe access to parameter values and output. DAL calls then look like instantiate wrapper, populate parameters, execute call:

    using (var sp = new uspGetManagerEmployees())
    {
        sp.ManagerID = 16;
        using (var reader = sp.Execute())
        {
            //map entities from the output
        }
    }

Or you can roll it all into a fluent DAL call, which is nicer to read and implicitly disposes the resources. This is fine; the wrapper classes are very simple to handwrite or generate. But as the codebase grows, you end up with a proliferation of very small wrapper classes. The wrappers don't add much other than encapsulating the stored procedure call and giving you typesafety for the parameters. With the dynamic extension in .NET 4.0 you have the option to build a single wrapper class and get rid of the one-to-one stored procedure to wrapper class mapping. In the dynamic version, the call looks like this:

    dynamic getUser = new DynamicSqlStoredProcedure("uspGetManagerEmployees", Database.AdventureWorks);
    getUser.ManagerID = 16;

    var employees = Fluently.Load<List<Employee>>()
                            .With<EmployeeMap>()
                            .From(getUser);

The important difference is that the ManagerID property doesn't exist in the DynamicSqlStoredProcedure class. Declaring the getUser object with the dynamic keyword allows you to dynamically add properties, and the DynamicSqlStoredProcedure class intercepts when properties are added and builds them as stored procedure parameters. When getUser.ManagerID = 16 is executed, the base class adds a parameter (using the convention that the parameter name is the property name prefixed by "@"), specifying the correct SQL Server data type (mapping it from the type of the value the property is set to), and setting the parameter value (a stripped-down sketch of this mechanism appears at the end of this post).

Code Sample

This is worked through in a sample project on github – Dynamic Stored Procedure Sample – which also includes a static version of the wrapper for comparison. (I'll upload this to the MSDN Code Gallery once my account has been resurrected). Points worth noting are:

- DynamicSP.Data – database-independent DAL that has all the data plumbing code.
- DynamicSP.Data.SqlServer – SQL Server DAL, a thin layer on top of the generic DAL which adds SQL Server specific classes. Includes the DynamicSqlStoredProcedure base class.
- DynamicSqlStoredProcedure.TrySetMember – invoked when a dynamic member is added. Assumes the property is a parameter named after the SP parameter name and infers the SqlDbType from the framework type. Adds a parameter to the internal stored procedure wrapper and sets its value.
- uspGetManagerEmployees – the static version of the wrapper.
- uspGetManagerEmployeesTest – test fixture which shows usage of the static and dynamic stored procedure wrappers.

The sample uses stored procedures from the AdventureWorks database in the SQL Server 2008 Sample Databases.

Discussion

For this scenario, the dynamic option is very favourable. Assuming your DAL is itself wrapped by a higher layer, the stored procedure wrapper classes have very little reuse. Even if you're codegening the classes and test fixtures, it's still additional effort for very little value. The main consideration with dynamic classes is that the compiler ignores all the members you use, and evaluation only happens at runtime. In this case, where scope is strictly limited, that's not an issue: you're relying on automated tests rather than the compiler to find errors, but that should just encourage better test coverage. Also, you can codegen the dynamic calls at a higher level. Performance may be a consideration, as there is a first-time-use overhead when the dynamic members of an object are bound. For a single run, the dynamic wrapper took 0.2 seconds longer than the static wrapper. The framework does a good job of caching the effort though, so for 1,000 calls the dynamic version still only takes 0.2 seconds longer than the static. You don't get IntelliSense on dynamic objects, even for the declared members of the base class, and if you've been using class names as keys for configuration settings, you'll lose that option if you move to dynamics. The approach may make code more difficult to read, as you can't navigate through dynamic members, but you do still get full debugging support.

    var employees = Fluently.Load<List<Employee>>()
                            .With<EmployeeMap>()
                            .From<uspGetManagerEmployees>
                            (
                                i => i.ManagerID = 16,
                                x => x.Execute()
                            );
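To make the TrySetMember behaviour described above concrete, here is a stripped-down sketch of the idea (not the sample project's actual base class) in which a dynamic property assignment becomes a SqlParameter; AddWithValue infers the SqlDbType from the .NET type of the value.

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Dynamic;

    public class DynamicStoredProcedureSketch : DynamicObject
    {
        private readonly SqlCommand _command;

        public DynamicStoredProcedureSketch(string procedureName, SqlConnection connection)
        {
            _command = new SqlCommand(procedureName, connection) { CommandType = CommandType.StoredProcedure };
        }

        // Called for any dynamic property assignment, e.g. sp.ManagerID = 16.
        // The property name becomes the parameter name, prefixed with "@".
        public override bool TrySetMember(SetMemberBinder binder, object value)
        {
            _command.Parameters.AddWithValue("@" + binder.Name, value ?? DBNull.Value);
            return true;
        }

        public IDataReader Execute()
        {
            return _command.ExecuteReader();
        }
    }

Used from a dynamic variable over an open connection, the call site mirrors the one in the post: dynamic sp = new DynamicStoredProcedureSketch("uspGetManagerEmployees", connection); sp.ManagerID = 16; followed by sp.Execute().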

    Read the article

< Previous Page | 864 865 866 867 868 869 870 871 872 873 874 875  | Next Page >