Search Results

Search found 2333 results on 94 pages for 'mr pixel'.


  • Get all Vector2 Points Within Radius

    - by CheapDevotion
    I am looking for a formula that will give me all of the Vector2 points within a certain radius of a given center. Essentially, what I am trying to do is change the color of each pixel in a 256 x 256 texture that lies within a certain radius of a specific pixel (using the Unity3D game engine). The programming language doesn't really matter, as I can probably convert it to something I can use.
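
    One way to read this (a minimal sketch, not Unity-specific since the asker says the language doesn't matter; the function name and the 256 x 256 grid size are illustrative): scan only the bounding square of the circle and keep the points whose squared distance from the center is at most r², which avoids a square root per pixel.

        # Sketch: collect all integer (x, y) points within `radius` of `center`
        # on a size x size grid, scanning only the circle's bounding square.
        def points_in_radius(center, radius, size=256):
            cx, cy = center
            r2 = radius * radius
            points = []
            for x in range(max(0, cx - radius), min(size, cx + radius + 1)):
                for y in range(max(0, cy - radius), min(size, cy + radius + 1)):
                    dx, dy = x - cx, y - cy
                    if dx * dx + dy * dy <= r2:
                        points.append((x, y))
            return points

        # Example: all pixels within 10 pixels of (128, 128), ready to be recolored.
        print(len(points_in_radius((128, 128), 10)))

    The same loop translates directly to C# over a Unity Texture2D, calling SetPixel for each point that passes the test.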

  • SharePoint 2010 - One or more services have started or stopped unexpectedly

    - by Mr Shoubs
    Does anyone know why I keep getting the following message in the SharePoint 2010 'Review problems and solutions' list: The following services are managed by SharePoint, but their running state does not match what SharePoint expects: SPAdminV4. This can happen if a service crashes or if an administrator starts or stops a service using a non-SharePoint interface. If SharePoint-managed services do not match their expected running state, SharePoint will be unable to correctly distribute work to the service. Failing Services: SPTimerService (SPTimerV4) I assume this is referring to the SharePoint 2010 Timer Service, but I can only see it in services.msc (and it is running), not in Application Management > Manage services on server. Does anyone know why this keeps occurring and how I can stop it? Note - I looked in the event log and all I can see is that same error.

  • What does pidgin mean by "Host unknown"?

    - by Mr. Jefferson
    Pidgin is telling me my XMPP account was disconnected; the error message is "Host Unknown". What specifically does this indicate? Can it not find the server it's supposed to connect to (one in my office)? I can ping the server in the "Domain" account setting (under Basic) without a problem, and I even tried specifying an IP address in the "Connect server" account setting (under Advanced) without success.

  • Automating SSRS 2008 R2 report snapshots while still running reports with the most recent data

    - by Mr Shoubs
    I would like to automate a report snapshot, but there is only an option to take a snapshot on the Report History tab. All the resources I've found suggest I need to go to processing options and select "Render this report from a snapshot". But I don't want to do that - when I go to a report, I want to get the most recent data. However, daily at midnight I'd like to take a snapshot and store it in the history, in case I want to compare the reports as of midnight for the last few weeks. Or am I doing this wrong and have to create a subscription instead? Note: this is for an auditing database and has far too much data to query a range of more than 1 day - reports are restricted as such. (1 day has over 1 million rows on its own.)

  • How can I disable Lumension Sanctuary

    - by Mr. Flibble
    I'm an administrator on my computer but I can't work out how to uninstall/disable Lumension Sanctuary. There is no Uninstall button in the uninstall programs list. I've disabled the service (Sanctuary command and control) but it's had no effect. Any ideas? I'm on Windows XP BTW.

  • How to change cpufreq settings in Kubuntu

    - by Mr Woody
    I have been using Kubuntu, and I would like to change the cpufreq settings. My understanding is that there is no applet for that, and I would have to do it with a script. So I run a command like this: sudo cpufreq-set -g userspace -c 0 -d 800Mhz -u 1200Mhz and when I type cpufreq-info, I get cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009 Report errors and bugs to [email protected], please. analyzing CPU 0: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 1.20 GHz. The governor "userspace" may decide which speed to use within this range. current CPU frequency is 1.20 GHz. cpufreq stats: 2.50 GHz:70.06%, 2.50 GHz:0.97%, 2.00 GHz:4.85%, 1.60 GHz:0.35%, 1.20 GHz:2.89%, 800 MHz:20.88% (193873) analyzing CPU 1: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 1 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 2.00 GHz and 2.00 GHz. The governor "performance" may decide which speed to use within this range. current CPU frequency is 2.00 GHz. cpufreq stats: 2.50 GHz:83.43%, 2.50 GHz:1.03%, 2.00 GHz:4.28%, 1.60 GHz:0.01%, 1.20 GHz:1.74%, 800 MHz:9.50% (3208) which shows that everything worked well (on cpu 0). The problem is that if I run cpufreq-info again after few minutes I get cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009 Report errors and bugs to [email protected], please. analyzing CPU 0: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 0 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 800 MHz. The governor "performance" may decide which speed to use within this range. current CPU frequency is 800 MHz. cpufreq stats: 2.50 GHz:69.73%, 2.50 GHz:0.97%, 2.00 GHz:4.83%, 1.60 GHz:0.35%, 1.20 GHz:2.92%, 800 MHz:21.20% (193880) analyzing CPU 1: driver: acpi-cpufreq CPUs which run at the same hardware frequency: 0 1 CPUs which need to have their frequency coordinated by software: 1 maximum transition latency: 10.0 us. hardware limits: 800 MHz - 2.50 GHz available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz available cpufreq governors: conservative, ondemand, userspace, powersave, performance current policy: frequency should be within 800 MHz and 800 MHz. The governor "performance" may decide which speed to use within this range. current CPU frequency is 800 MHz. cpufreq stats: 2.50 GHz:82.94%, 2.50 GHz:1.03%, 2.00 GHz:4.33%, 1.60 GHz:0.01%, 1.20 GHz:1.73%, 800 MHz:9.96% (3215) so it looks like some other process changed the settings. Does anyone know how to fix this? 
I also tried many different settings, but I get similar behavior.
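
    Two things stand out in the output above, offered as hedged observations rather than a confirmed diagnosis: the userspace policy was only applied to CPU 0 (-c 0), so CPU 1 kept its old governor, and something else on the system (a power-management daemon such as KDE's PowerDevil, laptop-mode-tools, or the distro's cpufrequtils/ondemand boot scripts) later rewrote the policy on both cores. A sketch of applying the limits to both cores and persisting them; the /etc/default/cpufrequtils variable names come from the Debian/Ubuntu cpufrequtils package, and the frequencies are simply the ones from the question:

        # Apply the same policy to every core, not only core 0
        sudo cpufreq-set -c 0 -g userspace -d 800MHz -u 1200MHz
        sudo cpufreq-set -c 1 -g userspace -d 800MHz -u 1200MHz

        # Persist the defaults read by the cpufrequtils init script
        # (contents of /etc/default/cpufrequtils):
        #   ENABLE="true"
        #   GOVERNOR="userspace"
        #   MIN_SPEED="800MHz"
        #   MAX_SPEED="1200MHz"

        # Ubuntu/Kubuntu also ship an init script that switches the governor to
        # "ondemand" shortly after boot; disabling it keeps a manual policy in place.
        sudo update-rc.d ondemand disable

    If the policy still changes mid-session after that, a desktop power-management tool is the next thing to check, since it can rewrite the governor whenever the power profile changes.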

  • How do I prevent Pidgin from loading XMPP chat room history on join?

    - by Mr. Jefferson
    In Pidgin, when I join a chat room, it loads the chat room history. iChat on the Mac has a preference in the Accounts section to set a variable amount of history to load, or disable loading history entirely. How do I do the same thing in Pidgin? Is there a preference somewhere that I've missed? The object is to have the chat room start fresh each day, so I'd also be fine with disabling chat room history entirely on the server if that's possible. But I didn't see that option either when I looked in Server Admin on the server. I found this list of XMPP room types, and it looks like creating a Temporary Room might be the best way to do this, but I don't want to have to create the room manually every morning. Right now I've got Pidgin set to auto-join the room when I log in; I want it to do that without loading history. EDIT: The XMPP multi-user chat spec referenced above also contains a section on managing history. And I got this to work by pulling up the XMPP Console plugin in Pidgin, copying the <presence /> stanza it sent when I joined the room, closing the room, pasting the stanza into the console, adding the <history /> element and sending it. When I opened the room again, I had no history. But it all came back the next time! So: how do I get Pidgin to send the <history /> stanza by default?
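
    For reference, the join stanza in question looks roughly like this (per the history-management section of XEP-0045; the room JID and nickname are placeholders). The unanswered part is still how to make Pidgin emit it by default rather than pasting it into the XMPP Console by hand:

        <presence to='room@conference.example.com/mynick'>
          <x xmlns='http://jabber.org/protocol/muc'>
            <!-- ask the MUC service to send no backlog on join -->
            <history maxstanzas='0'/>
          </x>
        </presence>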

  • Airport Extreme and Windows PC via regular LAN (not WIFI)

    - by Mr AJL
    So I got an AirPort Extreme, and everything works beautifully on the Mac, but my Windows 7 PC, which is connected via a regular Ethernet cable, can't see any network. The Win7 PC says there's no cable connected. Any ideas? Is there some kind of setting you have to enable with the AirPort Utility? I looked everywhere but can't find anything. Thanks!

  • VSFTPD does not allow upload with virtual users

    - by Mr. Squig
    I am attempting to setup VSFTPD with virtual users on a server running Ubuntu 12.04. I have configured the server to allow for virtual users to login, but I am having trouble getting it to allow uploads. My vsftpd.conf is as follows: listen=YES anonymous_enable=NO local_enable=YES write_enable=YES local_umask=022 anon_upload_enable=YES dirmessage_enable=YES use_localtime=YES xferlog_enable=YES connect_from_port_20=YES chroot_local_user=YES virtual_use_local_privs=YES guest_enable=YES guest_username=virtual user_sub_token=$USER local_root=/var/www/$USER hide_ids=YES secure_chroot_dir=/var/run/vsftpd/empty pam_service_name=vsftpd rsa_cert_file=/etc/ssl/private/vsftpd.pem /etc/pam.d/vsftpd contains: auth required pam_pwdfile.so pwdfile /etc/vsftpd.passwd crypt=hash account required pam_permit.so crypt=hash I have two virtual users set up, one of which has the same name as a local user. They each have a directory in /var/www/ owned by 'virtual'. As I understand it, when a virtual user logs in this way they will appear to the system as the user virtual. Using this configuration user can log on, but cannot upload files. The error given in /var/log/vsftpd.log is: Tue Nov 20 19:49:00 2012 [pid 2] CONNECT: Client "96.233.116.53" Tue Nov 20 19:49:07 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53" Tue Nov 20 19:49:11 2012 [pid 2] CONNECT: Client "96.233.116.53" Tue Nov 20 19:49:11 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53" Tue Nov 20 19:49:11 2012 [pid 3] [zac] FAIL CHMOD: Client "96.233.116.53", "/test.ppm 644" I have tried changing the permissions of these directories in all sorts of ways, but nothing seem to work. I have a feeling that it is something simple related to permissions. Any ideas?

  • DNS and DHCP not agreeing on an IP address

    - by Mr. Jefferson
    I'm having a problem where our Windows Server 2003 domain controller assigns my Windows 7 computer one IP address (x.x.x.75) via DHCP, but reports another (x.x.x.84) via DNS. This causes some interesting behavior on the network. If I change my adapter settings to get IP and DNS addresses from DHCP, I can access the internet, but no one on our network can access my computer. If I change my IP manually to what DNS says it is, I lose my internet access, but everyone can get to my computer again. I know that we have some old, invalid reverse DNS pointers hanging around (a reverse lookup on an IP address often gives more than one result, usually not including the one that is correct), so that could be contributing, but my problem is recent, and the invalid reverse pointers have been around a long time. What's going on, and how do I fix it?
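
    A hedged reading of the symptoms: the .84 answer is probably a stale A record left over from an old lease, so anything resolving the name through DNS goes to the wrong address while DHCP keeps handing out .75. The machine name and addresses below are placeholders; the usual steps are to delete the stale A/PTR records in the DNS console on the domain controller (and enable aging/scavenging so they don't come back), then re-register from the client:

        rem On the Windows 7 client - confirm the leased address and re-register it in DNS
        ipconfig /all
        ipconfig /flushdns
        ipconfig /registerdns

        rem From any machine - compare forward and reverse lookups against the DHCP lease
        nslookup mypc.example.local
        nslookup 10.0.0.75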

  • IIS rewrite rule to check for a querystring and add it if it's not there

    - by M.R.
    I'm trying to make an IIS URL rewrite rule that appends a URL parameter to the URL. The URL parameter is hssc. Any URL processed through the server needs that parameter, keeping in mind that some URLs will already have their own params and others won't (root URLs, etc.), so sometimes it will need to add ?hssc=1 and sometimes &hssc=1 - so, if I have a URL: http://www.blah.com should become http://www.blah.com/?hssc=1 http://www.blah.com/index.html should become http://www.blah.com/index.html?hssc=1 http://www.blah.com/?q=5 should become http://www.blah.com/?q=5&hssc=1 http://www.blah.com/index.html?q=5 should become http://www.blah.com/index.html?q=5&hssc=1 http://www.blah.com/index.html?q=5&hssc=1 should be left alone I also want the URL not to be hidden (as in a back-end rewrite behind the scenes). I need the parameter to appear in the URL, so when users copy or bookmark the URL, the parameter is there. I've set the condition to match it \&hssc|\?hssc - now I just need a way to write the URL so it appears and keeps the part of the original URL that is already there.
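
    A sketch of a web.config rule along those lines (untested; the rule name and the exact condition pattern are assumptions). Using a Redirect action rather than a Rewrite keeps the parameter visible in the address bar, and appendQueryString re-attaches whatever query string was already there, so ?q=5 becomes ?hssc=1&q=5 (same parameters, different order):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="AppendHsscParam" stopProcessing="true">
                <match url=".*" />
                <conditions>
                  <!-- only fire when hssc is not already in the query string -->
                  <add input="{QUERY_STRING}" pattern="(^|&amp;)hssc=" negate="true" />
                </conditions>
                <!-- Redirect so the browser URL (and any bookmark) carries the parameter -->
                <action type="Redirect" url="{R:0}?hssc=1" appendQueryString="true" redirectType="Found" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>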

  • Make my git user and apache user have read/write/delete access

    - by Mr A
    I am having permission problems on my server. I use the user developer to pull my git repository onto the server, while Apache writes and executes code as its own apache user. I always have problems when the app wants to write something in a directory (i.e. log files, cache ...), or when a cron job running with my developer rights wants to add something to folders that are written by apache. My question is: how can my developer user have the same write/delete access as the apache user, so the two avoid permission conflicts with each other? I am not fluent with Linux commands, so it would help if you could provide links or simply examples of doing so. Thanks.
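
    A common pattern for this (a sketch, not the only answer: the group name, the www-data user, and the /var/www/myapp path are assumptions - on Red Hat-style systems the Apache user is apache) is to put both users in a shared group, hand the tree to that group, and set the setgid bit on directories so new files keep the group:

        # Create a shared group and add both users to it
        sudo groupadd webdev
        sudo usermod -aG webdev developer
        sudo usermod -aG webdev www-data

        # Give the group ownership of the application tree
        sudo chgrp -R webdev /var/www/myapp

        # Directories: group rwx plus setgid (2775) so new files inherit the group
        sudo find /var/www/myapp -type d -exec chmod 2775 {} \;
        # Files: owner and group read/write
        sudo find /var/www/myapp -type f -exec chmod 664 {} \;

    After that, anything either user creates under the tree stays writable by the other; log out and back in (or restart the services) so the new group membership takes effect.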

  • Windows 7 can't connect via ethernet to Airport Extreme

    - by Mr AJL
    I have an AirPort Extreme router, and everything works beautifully on the Mac, but my Windows 7 computer, which is connected via a regular Ethernet cable, can't see any network. The Windows computer says there's no cable connected. Any ideas? Is there some kind of setting you have to enable with the AirPort Utility? I looked everywhere but can't find anything. Or do I need to install the AirPort Utility or something on the PC just to connect to the router? (I haven't tried, because the PC has no CD drive.)

  • Use synergy with Physical KVM

    - by Mr. Man
    I am using Synergy on a Linux Mint computer as the server with a Mac as the client. I also have a physical KVM switch. The problem I have is that whenever I switch the physical KVM to my Mac, Synergy stops working - the keyboard and mouse don't work with the Mac. Thanks in advance! EDIT: here are some logs. From the Mint machine: INFO: synergys.cpp,1042: Synergy server 1.3.1 on Linux 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16 14:04:26 UTC 2009 i686 DEBUG: synergys.cpp,1051: opening configuration synergy.conf DEBUG: synergys.cpp,1062: configuration read successfully DEBUG: CXWindowsScreen.cpp,847: XOpenDisplay(:0.0) DEBUG: CXWindowsScreenSaver.cpp,339: xscreensaver window: 0x00000000 DEBUG: CXWindowsScreen.cpp,117: screen shape: 0,0 1024x768 DEBUG: CXWindowsScreen.cpp,118: window is 0x03800004 DEBUG: CScreen.cpp,38: opened display DEBUG: CXWindowsScreen.cpp,679: registered hotkey F12 (id=efc9 mask=0000) as id=1 NOTE: synergys.cpp,500: started server INFO: CServer.cpp,1141: screen ubuntu shape changed NOTE: CClientListener.cpp,127: accepted client connection DEBUG: CClientProxy1_0.cpp,404: received client marks-mac.local info shape=-1024,0 2304x800 NOTE: CServer.cpp,278: client mac has connected INFO: CServer.cpp,447: switch from ubuntu to mac at -1024,393 INFO: CScreen.cpp,116: leaving screen [a long run of truncated, garbled CXWindowsClipboard.cpp DEBUG lines omitted] From the Mac: connecting to '192.168.3.5': 192.168.3.5:24800 connected to server entering screen leaving screen entering screen leaving screen stopped client

  • Linux Ubuntu: Updating the GRUB menu

    - by Mr X
    So apparently there are two Linux kernel versions on my laptop (I have a dual-boot system with Linux and Windows 8, but Linux is the primary OS). One of them is kernel version 3.11.0, whose headers + source code are incompatible with my wireless driver. My laptop is a Toshiba Satellite with a Realtek RTL8188CE wireless LAN. So the newer kernel version is still the first option on the menu, and to get to Linux I use the "Previous versions of Linux" entry, which is running kernel version 3.2.0-55. What can I do to update the GRUB menu so that the Linux entry with the 3.2.0-55 kernel appears as the primary option? Do I need to get rid of/uninstall the newer kernel version? How can I do this without screwing up Linux entirely?
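
    A hedged sketch of the two usual routes on Ubuntu-based systems; the submenu/entry titles and the 3.11 package version strings below are placeholders and must match what is actually listed in /boot/grub/grub.cfg and in dpkg -l:

        # Option 1: make the 3.2.0-55 entry the default without removing anything.
        # In /etc/default/grub, point GRUB_DEFAULT at the submenu entry, e.g.:
        #   GRUB_DEFAULT="Previous Linux versions>Ubuntu, with Linux 3.2.0-55-generic"
        # (or set GRUB_DEFAULT=saved and run: sudo grub-set-default "<entry title>")
        sudo update-grub

        # Option 2: remove the incompatible newer kernel so 3.2.0-55 becomes the
        # first entry automatically (version numbers are placeholders - check dpkg -l)
        sudo apt-get purge linux-image-3.11.0-13-generic linux-headers-3.11.0-13
        sudo update-grub

    Option 1 is the safer first step, since nothing is uninstalled and it can be reverted by editing /etc/default/grub again.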

  • Restrict VPN client traffic to certain domains/IP

    - by mr-euro
    Hi. Is there any way to restrict a VPN client so that only certain traffic is routed via the VPN and the rest goes via the local gateway? For example: traffic to a certain IP or domain gets routed across the VPN and all other requests do not. Let me know if you need more details. Thank you.
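
    This is usually called split tunnelling, and where it is configured depends on the VPN software, which isn't stated here. Two hedged examples (the network addresses are placeholders); note that routing decisions are made per IP/subnet, so "per domain" only works to the extent the domain maps to known addresses:

        # OpenVPN server config: push only the networks that should cross the tunnel,
        # and do NOT push "redirect-gateway def1" (which would send everything)
        push "route 10.10.0.0 255.255.255.0"

        rem Windows built-in VPN client: untick "Use default gateway on remote
        rem network" in the connection's IPv4 advanced settings, then add routes:
        route add 10.10.0.0 mask 255.255.255.0 10.10.0.1 -p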

  • How does the linux update manager work?

    - by Mr.Student
    I want to know how the update manager for Linux works. For instance, how does my Linux distro check whether there are any available updates for download, and which servers does it download these updates from? If I am dealing with third-party software that is not part of the main distro, how do those programs interact with my update manager to notify me that they have updates available? Lastly, what would be some good literature on the subject?
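
    In short, on Debian/Ubuntu-style distros (an assumption, since the distro isn't named): the package manager keeps a list of repository URLs, downloads each repository's package index, and the update manager simply compares those indexes against the versions installed on disk. Third-party software hooks into the same mechanism by installing its own repository entry and signing key, so its updates show up alongside the distro's. A quick look at the moving parts:

        # Repositories the system knows about
        cat /etc/apt/sources.list              # the distro's own mirrors
        ls /etc/apt/sources.list.d/            # third-party vendors drop .list files here
        # e.g. a vendor file typically holds one line such as:
        #   deb http://dl.google.com/linux/chrome/deb/ stable main

        # Refresh the package indexes from all of those servers,
        # then preview what the update manager would offer
        sudo apt-get update
        apt-get --simulate upgrade

    Other families work the same way with different tools (yum/dnf with .repo files, zypper with repositories), and the APT chapters of the Debian Administrator's Handbook are a reasonable place to read further.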

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights. // Shadow term (1 = no shadow) float shadow = 1; // [Light Space -> Shadow Map Space] // Transform the surface into light space and project // NB: Could be done in the vertex shader, but doing it here keeps the // "light shader" abstraction and doesn't limit the number of shadowed lights float4x4 LightViewProjection = mul(LightView, LightProjection); float4 surf_tex = mul(position, LightViewProjection); // Re-homogenize // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized) surf_tex.xyz /= surf_tex.w; // Rescale viewport to be [0,1] (texture coordinate system) float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = -surf_tex.y * 0.5f + 0.5f; // Half texel offset //shadow_tex += (0.5 / 512); // Scaled distance to light (instead of 'surf_tex.z') float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; //float rescaled_dist_to_light = surf_tex.z; // [Variance Shadow Map Depth Calculation] // No filtering float2 moments = tex2D(ShadowSampler, shadow_tex).xy; // Flip the moments values to bring them back to their original values moments.x = 1.0 - moments.x; moments.y = 1.0 - moments.y; // Compute variance float E_x2 = moments.y; float Ex_2 = moments.x * moments.x; float variance = E_x2 - Ex_2; variance = max(variance, Bias.y); // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1) // One-tailed inequality valid if float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x); // Compute probabilistic upper bound (mean distance) float m_d = moments.x - rescaled_dist_to_light; // Chebychev's inequality float p = variance / (variance + m_d * m_d); p = ReduceLightBleeding(p, Bias.z); // Adjust the light color based on the shadow attenuation shadow *= max(lit_factor, p); This is what I know for certain so far: The lighting is correct if I do not try and calculate the shadow term. (No shadows) The shadow term is correct when calculated using forward rendered lighting. (VSM works with forward rendered lights) With the current rescaled light distance (lightAttenuation.y is the far plane value): float rescaled_dist_to_light = dist_to_light / LightAttenuation.y; The light is correct and the shadow appears to be zoomed in and misses the blurring: When I do not rescale the light and use the homogenized 'surf_tex': float rescaled_dist_to_light = surf_tex.z; the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows: // [Position] float4 position; // [Screen Position] position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component position.z = 1.0 - position.z; position.w = 1.0; // 1.0 = position.w / position.w // [World Position] position = mul(position, CameraViewProjectionInverse); // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close) position.xyz /= position.w; position.w = 1.0; Using the inverse matrix of the camera's view x projection matrix does work for lighting but maybe it is incorrect for shadow calculation? 
EDIT: Light calculations for shadow including 'dist_to_light' // Work out the light position and direction in world space float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43); // Direction might need to be negated float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33); // Unnormalized light vector float3 dir_to_light = light_position - position; // Direction from vertex float dist_to_light = length(dir_to_light); // Normalise 'toLight' vector for lighting calculations dir_to_light = normalize(dir_to_light); EDIT2: These are the calculations for the moments (depth) //============================================= //---[Vertex Shaders]-------------------------- //============================================= DepthVSOutput depth_VS( float4 Position : POSITION, uniform float4x4 shadow_view, uniform float4x4 shadow_view_projection) { DepthVSOutput output = (DepthVSOutput)0; // First transform position into world space float4 position_world = mul(Position, World); output.position_screen = mul(position_world, shadow_view_projection); output.light_vec = mul(position_world, shadow_view).xyz; return output; } //============================================= //---[Pixel Shaders]--------------------------- //============================================= DepthPSOutput depth_PS(DepthVSOutput input) { DepthPSOutput output = (DepthPSOutput)0; // Work out the depth of this fragment from the light, normalized to [0, 1] float2 depth; depth.x = length(input.light_vec) / FarPlane; depth.y = depth.x * depth.x; // Flip depth values to avoid floating point inaccuracies depth.x = 1.0f - depth.x; depth.y = 1.0f - depth.y; output.depth = depth.xyxy; return output; } EDIT 3: I have tried the folloiwng: float4 pp; pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component pp.z = 1.0 - pp.z; pp.w = 1.0; // 1.0 = position.w / position.w // Determine the depth of the pixel with respect to the light float4x4 LightViewProjection = mul(LightView, LightProjection); float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection); float4 vPositionLightCS = mul(pp, matViewToLightViewProj); float fLightDepth = vPositionLightCS.z / vPositionLightCS.w; // Transform from light space to shadow map texture space. float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f); vShadowTexCoord.y = 1.0f - vShadowTexCoord.y; // Offset the coordinate by half a texel so we sample it correctly vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize This suffers the same problem as the second picture. I have tried storing the depth based on the view x projection matrix: output.position_screen = mul(position_world, shadow_view_projection); //output.light_vec = mul(position_world, shadow_view); output.light_vec = output.position_screen; depth.x = input.light_vec.z / input.light_vec.w; This gives a shadow that has lots surface acne due to horrible floating point precision errors. Everything is lit correctly though. EDIT 4: Found an OpenGL based tutorial here I have followed it to the letter and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scaled matrix to get the uv coordinates for the shadow map sampler /// <summary> /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region. 
/// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0. /// <summary> const float4x4 ScaleMatrix = float4x4 ( 0.5, 0.0, 0.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); I had to negate the 0.5 for the y scaling (M22) in order for it to work but the shadowing is still not correct. Is this really the correct way to scale? float2 shadow_tex; shadow_tex.x = surf_tex.x * 0.5f + 0.5f; shadow_tex.y = surf_tex.y * -0.5f + 0.5f; The depth calculations are exactly the same as the source code yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

  • Update all the servers through one virtual server using a storage area network virtual machine

    - by Mr.Calm
    I am using Ubuntu and Oracle VirtualBox, and I use this script to start nginx inside a VirtualBox guest, placed inside ~/init.d: #!/bin/bash ### BEGIN INIT INFO # Provides: Testinit # Required-Start: # Required-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start daemon at boot time # Description: Enable service provided by daemon. ### END INIT INFO # RETVAL=0; start() { CurrentTime=$(date +%d/%m/%Y"-"%I:%M:%S) ./usr/local/nginx/sbin/nginx echo "Current Time:"$CurrentTime>>/home/server/Desktop/NginxLogs.txt echo "!Starting nginx!" >>/home/server/Desktop/NginxLogs.txt Like this, I want to write an auto-setup script (a setup.sh file) and place it in all the virtual machines on my system - for example 8 VirtualBox guests, all with nginx installed. The problem is that when I want to change something in setup.sh, I have to go to each and every virtual machine, or reach each one over SSH from my main machine. I am thinking of writing another script (e.g. Update.sh) that is given the path of a file that was saved and recently edited on the main machine (e.g. DummySetup.sh); as soon as I run that script, the setup.sh files stored in each virtual machine should pick up the change, i.e. have their contents replaced with DummySetup.sh's contents. I hope this is possible. Help would be appreciated, thank you.
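
    A minimal sketch of that Update.sh idea, run from the main machine (the guest addresses, user name, and paths are placeholders, and key-based SSH login is assumed so the loop doesn't prompt for 8 passwords): copy the freshly edited DummySetup.sh over each guest's setup.sh and make it executable.

        #!/bin/bash
        # Update.sh - push the edited script from the host to every VirtualBox guest
        SRC=/home/server/DummySetup.sh              # recently edited copy on the main machine
        DEST="init.d/setup.sh"                      # path inside each guest, relative to the login user's home (placeholder)
        HOSTS="192.168.56.101 192.168.56.102 192.168.56.103"   # guest IPs (placeholders)

        for h in $HOSTS; do
            echo "Updating $h ..."
            scp "$SRC" "server@$h:$DEST" && ssh "server@$h" "chmod +x $DEST"
        done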
