Search Results

Search found 73044 results on 2922 pages for 'custom data attribute'.


  • Getting zeros between data while reading a binary file in C

    - by indiajoe
    I have binary data which I am reading into an array of long integers using a C program. A hexdump of the file shows that after the first few data points, the data resumes at an offset 0x20000 addresses away. The hexdump output looks like this:

        0000000 0000 0000 0000 0000 0000 0000 0000 0000
        *
        0020000 0000 0000 0053 0000 0064 0000 006b 0000
        0020010 0066 0000 0068 0000 0066 0000 005d 0000
        0020020 0087 0000 0059 0000 0062 0000 0066 0000
        ........ and so on...

    But when I read it into an array 'data' of long integers with the typical fread call

        fread(data, sizeof(*data), filelength/sizeof(*data), fd);

    the data array fills up with zeros until it reaches the 0x20000 location; after that it reads the data correctly. Why is it reading regions where my file has no data? Or how can I make it read only my data, skipping anything in between that isn't in the file? I know it looks like a trivial problem, but I cannot figure it out even after googling all night. Can anyone suggest where I am going wrong? Other info: I am working on a GNU/Linux machine (the slax-atma distro, to be specific) and my C compiler is gcc.
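
    One detail worth knowing when reading such dumps: hexdump collapses runs of identical lines and prints a single `*`, so the region between offset 0 and 0x20000 really is filled with zeros in the file itself, and fread is reproducing it faithfully. A minimal C sketch of skipping a known zero-filled header follows; the file name and buffer size are illustrative, only the 0x20000 offset comes from the question:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            FILE *fd = fopen("data.bin", "rb");
            if (!fd) { perror("fopen"); return 1; }

            /* skip the zero-filled region the hexdump shows before 0x20000 */
            fseek(fd, 0x20000, SEEK_SET);

            long data[1024];
            size_t got = fread(data, sizeof(*data), 1024, fd);
            printf("read %zu elements, first = 0x%lx\n", got, got ? data[0] : 0L);

            fclose(fd);
            return 0;
        }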

    Read the article

  • AJAX Post Not Sending Data?

    - by Jascha
    I can't for the life of me figure out why this is happening. This is kind of a repost, so forgive me, but I have new data. I am running a JavaScript log-out function called logOut() that makes a jQuery AJAX call to a PHP script...

        function logOut(){
            var data = new Object;
            data.log_out = true;
            $.ajax({
                type: 'POST',
                url: 'http://www.mydomain.com/functions.php',
                data: data,
                success: function() {
                    alert('done');
                }
            });
        }

    The PHP function it calls is here:

        if(isset($_POST['log_out'])){
            $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('logOutSuccess')";
            $connection->runQuery($query); // <-- my own database class...
            // omitted code that clears session etc...
            die();
        }

    Now, 18 hours out of the day this works, but for some reason, every once in a while, the POST data will not trigger my query (this lasts about an hour or so). I figured out the POST data is not being set by adding this at the end of my script:

        $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('POST FAIL')";
        $connection->runQuery($query);

    So now I know for certain my log-out function is being skipped, because my database shows 'POST FAIL' entries where, if it were NOT being skipped, 'logOutSuccess' entries would appear. I know it is being skipped for two reasons: one, the die() at the end of my first function, and two, if it were a success a "logOutSuccess" would be registered in the table. Any thoughts? One friend says it's a janky hosting company (hostgator.com). I personally like them because they are cheap and I'm a fan of cPanel. But if that's the case??? Thanks in advance. -J
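
    A first debugging step (my own sketch, not the original code) is to attach an error handler so silent failures become visible, and to use a relative URL so a www/non-www or http/https mismatch can't turn the POST into a cross-domain request the browser drops:

        function logOut() {
            $.ajax({
                type: 'POST',
                url: '/functions.php',   // relative path avoids any www vs. non-www origin mismatch
                data: { log_out: true },
                success: function () { alert('done'); },
                error: function (xhr, textStatus, errorThrown) {
                    // surfaces the timeouts, aborts and HTTP errors that a success-only handler hides
                    alert('logout failed: ' + textStatus);
                }
            });
        }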

    Read the article

  • Cross-domain data access in JavaScript

    - by vit
    We have an ASP.Net application hosted on our network and exposed to a specific client. This client wants to be able to import data from their own server into our application. The data is retrieved with an HTTP request and is CSV formatted. The problem is that they do not want to expose their server to our network and are requesting the import be done on the client side (all clients are on the same network as their server). So, what needs to be done is:

    1. They request an import page from our server
    2. The client script on the page issues a request to their server to get CSV-formatted data
    3. The data is sent back to our application

    This is not a challenge when both servers are on the same domain: a simple hidden iframe or something similar will do the trick, but here what I'm getting is a cross-domain "access denied" error. They also refuse to change the data format to return JSON or XML formatted data. What I have tried and learned so far:

    - Hidden iframe -- "access denied"
    - XMLHttpRequest -- behaviour depends on the browser security settings: may work, may work while nagging the user with security warnings, or may not work at all
    - Dynamic script tags -- would have worked if they could have returned data in JSON format
    - IE client data binding -- the same "access denied" error

    Is there anything else I can try before giving up and saying that it will not be possible without exposing their server to our application, changing their data format or changing their browser security settings? (The DNS trick is not an option, by the way.)
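
    One avenue not on the list: if the client can add a single response header on their server, a plain XMLHttpRequest becomes a legal cross-origin read in browsers that support CORS. A sketch, assuming their server runs Apache with mod_headers enabled (our application's origin below is an assumption):

        <IfModule mod_headers.c>
            # allow the import page served from our application to read the CSV response
            Header set Access-Control-Allow-Origin "http://ourapp.example.com"
        </IfModule>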

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define what the question text is for the QuestionId and demographics for the RespondentId...

    Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working... The problem is that it seems slow and we don't have that much data (fewer than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out which respondents are being exported and then select their combined cached de-aggregated data.

    To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients, so that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO. With that in place, I would then be able to get my export with filtering and XML calls into the XML column.

    What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!
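
    A minimal T-SQL sketch of the caching idea, with table and column names invented for illustration: populate the XML column once per respondent, then shred it at export time with value():

        -- cache the de-aggregated answers once per respondent
        UPDATE r
        SET    r.CachedAnswers = (SELECT QuestionId, Answer
                                  FROM   Data d
                                  WHERE  d.RespondentId = r.RespondentId
                                  FOR XML PATH('answer'), ROOT('data'), TYPE)
        FROM   Respondent r;

        -- shred at export time; one .value() call per exported column
        SELECT RespondentId,
               CachedAnswers.value('(/data/answer[QuestionId=1]/Answer)[1]',
                                   'nvarchar(50)') AS [Question 1]
        FROM   Respondent;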

    Read the article

  • How to deal with a flaw in System.Data.DataTableExtensions.CopyToDataTable()

    - by andy
    Hey guys, so I've come across something which is perhaps a flaw in the extension method .CopyToDataTable. This method is used by importing (in VB.NET) System.Data.DataTableExtensions and then calling the method against an IEnumerable. You would do this if you want to filter a DataTable using LINQ and then restore the DataTable at the end, i.e.:

        Imports System.Data.DataRowExtensions
        Imports System.Data.DataTableExtensions

        Public Class SomeClass
            Private Shared Function GetData() As DataTable
                Dim Data As DataTable
                Data = LegacyADO.NETDBCall
                Data = Data.AsEnumerable.Where(Function(dr) dr.Field(Of Integer)("SomeField") = 5).CopyToDataTable()
                Return Data
            End Function
        End Class

    In the example above, the Where filtering might return no results. If this happens, CopyToDataTable throws an exception because there are no DataRows. Why? The correct behavior should be to return a DataTable with Rows.Count = 0. Can anyone think of a clean workaround for this, in such a way that whoever calls CopyToDataTable doesn't have to be aware of this issue? System.Data.DataTableExtensions is a static class, so I can't override the behavior... any ideas? Have I missed something? cheers

    UPDATE: I have submitted this as an issue to Connect. I would still like some suggestions, but if you agree with me, you could vote up the issue at Connect via the link above. cheers
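
    One hedged workaround (my own sketch, not a fix from Microsoft): wrap the call in your own extension method that returns an empty clone of a template table when the sequence is empty, so callers never see the exception. Names here are invented:

        Imports System.Collections.Generic
        Imports System.Data
        Imports System.Linq
        Imports System.Runtime.CompilerServices

        Module SafeCopyExtensions
            <Extension()>
            Public Function CopyToDataTableSafe(Of T As DataRow)(
                    rows As IEnumerable(Of T), template As DataTable) As DataTable
                If rows.Any() Then
                    Return rows.CopyToDataTable()
                End If
                Return template.Clone() ' same schema, zero rows
            End Function
        End Module

    Usage would then be: Data = Data.AsEnumerable.Where(...).CopyToDataTableSafe(Data)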

    Read the article

  • Load some data from database and hide it somewhere in a web page

    - by kwokwai
    Hi all, I am trying to load some data (which may be up to a few thousand words) from the database and store it somewhere in an HTML page for comparing against the data input by users. I am thinking of loading the data into a textarea inside a hidden div:

        <div id="reference" style="display:none;">
            <textarea rows="2" cols="20" id="database">
            html, htm, php, asp, jsp, aspx, ctp, thtml, xml, xsl...
            </textarea>
        </div>

        <table border=0 width="100%">
            <tr>
                <td>Username</td>
                <td>
                    <div id="username">
                        <input type="text" name="data" id="data">
                    </div>
                </td>
            </tr>
        </table>

        <script>
        $(document).ready(function(){
            // comparing the data loaded from the database with the user's input
            if($("#data").val()==$("#database").val())
                {alert("error");}
        });
        </script>

    I am not sure if this is the best way to do it, so could you give me some advice and suggest your own methods please?
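
    A hedged alternative sketch: rather than hiding a textarea, emit the word list as a JavaScript array (e.g. via json_encode on the server) and test membership when the field changes. The list below is an assumption for illustration; note that anything sent to the page is visible via view-source, so a server-side check is still needed if the comparison matters for security:

        <script>
        var reserved = ["html", "htm", "php", "asp", "jsp", "aspx"]; // e.g. <?php echo json_encode($words); ?>
        $(document).ready(function () {
            $("#data").blur(function () {
                if ($.inArray($(this).val(), reserved) !== -1) {
                    alert("error");
                }
            });
        });
        </script>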

    Read the article

  • iPad SQLite Push and Pull Data from an external MS SQL Server DB

    - by MattyD
    This carries on from my previous post (http://stackoverflow.com/questions/4182664/ipad-app-pull-and-push-relational-data). My plan is that when the iPad application starts I am going to pull config data (i.e. Departments, Types, etc. — relational data that is used across the system) from a web-hosted MS SQL Server DB via a web service and populate it into an SQLite DB on the iPad. Then when I load a listing I will pull the data over the line again via a web service and populate it into the SQLite db on the iPad (then just run select commands to populate the listing). My questions are:

    1. What is the most efficient way to transfer data across the line via the web? Everyone seems to do it a different way. My idea is that I will have a web service for each type of data pull (e.g. RetrieveContactListing) that will query the db and then convert that data into "something" to send across the line. My question really is, what is the "something" it should be converted into?
    2. Everyone talks about OData services. Are they suited to applications where complex reads and writes are needed? I've created a simple iPhone app before that talked to a SQL Server db (I just sent my own structured XML across the line), but now with this app the data calls are going to be a lot larger, so efficiency is key.
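
    For question 1's "something", JSON is a common answer because it is compact and trivially parsed on the device. A sketch of what a RetrieveContactListing response might look like — all field names here are invented:

        {
          "contacts": [
            { "id": 17, "name": "Jane Doe", "departmentId": 3, "type": "Customer" },
            { "id": 18, "name": "John Roe", "departmentId": 5, "type": "Supplier" }
          ]
        }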

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in an ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it.

    I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key. Thus it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that; permissions should also prevent the leaking in the first place.

    It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that the two employees can run the few times a year they need the restricted data, so the data is decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation.

    Do you have a better suggestion? Which method would you recommend? More importantly, why?
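
    A minimal C# sketch of the encrypt-in-the-web-app idea: the application only ever holds the public key, so a compromised server (or an injected query) exposes nothing already encrypted. The method name and key handling are assumptions, not a vetted design:

        using System.Security.Cryptography;
        using System.Text;

        public static class PiiProtector
        {
            // publicKeyXml holds only the public RSA parameters; the private key lives offline.
            public static byte[] EncryptField(string plaintext, string publicKeyXml)
            {
                using (var rsa = new RSACryptoServiceProvider(2048))
                {
                    rsa.FromXmlString(publicKeyXml);
                    // OAEP padding; RSA is only practical for short fields or for
                    // encrypting a per-record AES key rather than bulk data
                    return rsa.Encrypt(Encoding.UTF8.GetBytes(plaintext), true);
                }
            }
        }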

    Read the article

  • Passing $_GET or $_POST data to PHP script that is run with wget

    - by Matt
    Hello, I have the following line of PHP code which works great:

        exec( 'wget http://www.mydomain.com/u1.php /dev/null &' );

    u1.php does various types of maintenance on my server, and the above command makes it happen in the background. No problems there. But I need to pass variable data to u1.php before it's executed. I'd like to pass POST data preferably, but could accommodate GET or SESSION data if POST isn't an option. Basically, the type of data being passed is user-specific and will vary depending on who is logged in to the site and triggering the above code. I've tried adding the GET data to the end of the URL and that didn't work. So how else might I be able to send the data to u1.php? POST data is preferred; SESSION data would work as well (but I tried this and it didn't pick up the logged-in user's session data). GET would be a last resort. Thanks!
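
    A sketch of both approaches (parameter names are hypothetical). wget's --post-data flag sends a POST body; for GET, the URL must be quoted so the shell doesn't split it at the &. As a side note, in the original line /dev/null is treated as a second URL to fetch, not as an output file — the output-discarding form is -O /dev/null:

        // POST (preferred): --post-data sends an application/x-www-form-urlencoded body
        exec("wget --post-data 'user_id=42&task=cleanup' -q -O /dev/null 'http://www.mydomain.com/u1.php' > /dev/null 2>&1 &");

        // GET fallback: quote the whole URL so the & survives the shell
        exec("wget -q -O /dev/null 'http://www.mydomain.com/u1.php?user_id=42&task=cleanup' > /dev/null 2>&1 &");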

    Read the article

  • Windows 7 Backup not backing up custom library?

    - by James McMahon
    I have created a custom library under Windows 7 64-bit Professional to handle my source code. When I tried Windows Backup and Restore for the first time I got the following error:

        Backup encountered a problem while backing up file
        C:\Windows\System32\config\systemprofile\Source.
        Error: (The system cannot find the file specified. (0x80070002))

    I've found a thread on the error on the Microsoft Answers site, but it appears to be 404 (there is a version in Google's cache) and the thread starter never gets an answer that works. The official Microsoft answer on this is:

        This problem is due to one or more profiles under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList with a missing ProfileImagePath. To check whether you have missing profiles:

        1. Open regedit and navigate to the above registry key (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList).
        2. Expand the list and click on each of the profiles listed. The first 3 profiles should have ProfileImagePath values of %SystemRoot%\System32\Config\SystemProfile, %SystemRoot%\ServiceProfiles\LocalService, and %SystemRoot%\ServiceProfiles\NetworkService respectively.
        3. Starting from the 4th profile, the ProfileImagePath should contain the path to a user profile on your machine, such as C:\Users\Christine. If one or more profiles have no profile image, then you have missing profiles.

        To work around this, delete the profile in question. (Caution: the registry contains critical settings that are necessary for your system to function properly; take extra caution while making changes.)

        1. First, export the ProfileList key for safekeeping. (Right-click on the key, choose "Export", and save it to the desktop.)
        2. Right-click on the profile in question and choose Delete.
        3. Try backup again.

    This does not work for me. Anyone have any idea what is going on here?
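
    A quick way to audit the ProfileList entries without clicking through regedit — a sketch using the stock reg.exe tool:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath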

    Read the article

  • Should Windows services be created with custom users, or should I use one of LocalSystem/LocalService/NetworkService?

    - by Justin Dearing
    I'm asking the question in general for the average custom-developed NT service or Unix OSS daemon ported to Windows with SCM support. However, at the moment my immediate concern is for mongodb. From my experience with Unix I like all my services to run as different unprivileged users. The way this has translated to Windows is as follows:

    1. Create a local (or domain, if it has to talk to SQL Server) Windows user with a long random password (lately an ASCII85-encoded GUID generated from a different machine).
    2. Set the password to never expire and forbid the user from changing it.
    3. Remove that user from the "Users" group.
    4. Grant that user the "Log on as a service" right.
    5. Give it read permission to the folder where the app resides, and write permission to the logs and data files the application uses.
    6. Assign the user to the service.
    7. Troubleshoot until the service starts.

    My feeling is that these unprivileged users are less powerful than the 3 special service users. I also feel that by isolating which users run which services, I would limit the collateral damage if a way to compromise one service were found.
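
    Step 6 can be scripted with the built-in sc.exe; a sketch where the service and account names are examples (note the space after each = is required by sc's parser):

        sc config MongoDB obj= ".\svc-mongodb" password= "the-long-random-password"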

    Read the article

  • SuPHP custom php.ini doesn't get read

    - by Mathieu Dumoulin
    Took me about 4 hours to get FastCGI + suPHP running off Ubuntu 11.10, and I'm now happy that it works mighty fine, except for ONE big problem: custom php.ini files don't seem to load. I tried changing some options and then firing off a phpinfo(), and nothing changes in the phpinfo() output, which leads me to think that there is definitely a problem with the loading of the configuration file.

        <IfModule mod_suphp.c>
            AddHandler x-httpd-php .php
            <Location />
                SuPHP_AddHandler x-httpd-php
            </Location>
            suPHP_ConfigPath /home/mdumoulin/Documents/tests/tests
            suPHP_Engine on
        </IfModule>

    As you can see, I took great care to make sure I wasn't referencing the php.ini file itself but the directory of the vhost. The php.ini located at /home/mdumoulin/Documents/tests/tests/php.ini contains:

        [PHP]
        error_reporting = E_ALL & ~E_DEPRECATED & ~E_NOTICE
        display_errors = Off

    And the log in /var/log/suphp/suphp.log doesn't contain anything relevant (only old errors that occurred before this post while I was testing suPHP)... So I'm stumped; I don't know what more I can do. Anyone got an idea?

    EDIT: FINALLY got time to work on this. I disabled FastCGI and enabled only suPHP, but after restarting I still see "Server API: CGI/FastCGI". Is this what I should be getting or not? I believe it's normal that I get CGI, since suPHP works with a CGI... but I'm not too sure anymore...
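
    A quick sanity check (a sketch using stock PHP functions) that reports exactly which php.ini the interpreter loaded, which settles whether suPHP_ConfigPath is being honoured at all:

        <?php
        // reports the ini file actually loaded, or false if none was read
        var_dump(php_ini_loaded_file());
        var_dump(ini_get('display_errors'));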

    Read the article

  • Custom MS-DOS / FreeDOS

    - by user1801387
    Goal: build a custom DOS to boot into, to automate tasks like formatting a drive or doing recovery. I've been using Grub4DOS to boot into these images. So far I've looked into taking a Windows repair disk ISO and extracting it, but I can't seem to find the autoexec.bat on the disk, and I really don't know where to look for the startup configuration file to change, or how to add an autoexec.bat. I've tried MS-DOS 6.22, but it lacks the diskpart tool I require. I've tried extracting the images and adding it, but then I got a boot failure. I assume that after I added it, all the file names went to lower case, and I assume the OS is case-sensitive. Then I've looked into using FreeDOS, but I don't know how it works at all, partially because I can't seem to grasp the help/wiki's information. I looked into getting a bare-bones release with just the kernel and, I think, the config.sys file, but I don't have any idea how the packaging system works to incorporate diskpart into it. So really, I'm in general looking for a small bootable DOS where I can incorporate diskpart and set up an autoexec.bat for the actual function to carry out, and to boot into. Thanks :) This is just for personal use also.
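
    For the Grub4DOS side, a typical menu.lst entry that maps a floppy image and chainloads it looks like this sketch (the image path is an example):

        title Custom FreeDOS image
        map /images/freedos.img (fd0)
        map --hook
        rootnoverify (fd0)
        chainloader +1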

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem:

        package { "rubygems":
            ensure   => installed,
            provider => "gem";
        }

        package { "cloudservers":
            ensure   => installed,
            provider => "gem",
            require  => Package["rubygems"];
        }

        class hosts::us {
            $hosts = get_hosts("us")
            hostentry { $hosts: }
        }

        define hostentry() {
            $parts   = split($name, ',')
            $address = $parts[0]
            $ip      = $parts[1]
            $aliases = $parts[2]

            host { $address:
                ip           => $ip,
                host_aliases => $aliases,
            }
        }

    Is there a way to stop the function being run so early, or at least have its run depend on the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?
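
    One point worth keeping in mind: parser functions execute on the puppet master during catalog compilation, so package resources (which are applied on the agent afterwards) can never satisfy them — the gem has to exist on the master. A hedged Ruby sketch of loading the gem lazily inside the function so the failure is at least explicit; the body is illustrative, not the original function:

        # lib/puppet/parser/functions/get_hosts.rb (sketch)
        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            begin
              require 'cloudservers'
            rescue LoadError
              raise Puppet::ParseError,
                    "get_hosts: the cloudservers gem is not installed on the puppet master"
            end
            # ... query the Rackspace API and return the host list here ...
          end
        end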

    Read the article

  • Are there any custom keyboards available for laptops?

    - by Ahe
    My work laptop is an HP EliteBook 8560w which I mainly use for programming. Usually I have an external keyboard, but recently I have been working out of the office and therefore have been using the laptop's own keyboard. One thing has really started to bug me: the keyboard layout of this 15.6" laptop contains a numpad, but the arrow keys are really bad (too small). Also, when programming, I really miss standard inverted-T arrow keys and the Home/End/PgUp/PgDn buttons. Then it occurred to me: I would rather give up a numpad than standard arrow keys. (The keyboard real estate in a 15.6" laptop would allow this, and I really have to agree with Jeff Atwood here: http://www.codinghorror.com/blog/2009/02/have-keyboard-will-program.html) Which brings me to my question: do any laptop manufacturers make custom keyboards for their laptops, or is there some third-party manufacturer who could supply this kind of special keyboard? Quick googling on this doesn't give any meaningful results. It looks like I'll have to carry an external keyboard with me if no one here can give any pointers.

    Read the article

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already get the Internet Gateway and NAT. I was going to set up all my web servers (Apache2 instances) and DB servers in the private subnet and use a load balancer/reverse proxy to pick up requests and send them into the private subnet's cluster of servers. My question, then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and run them through the NAT using nginx or Pound? I like the second option just for the sake of having an instance I can log into and check, as well as taking advantage of caching and fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions. Also, if you have an opinion on this as well: would using the second option allow me to have only 1 public IP address and be able to route SSH connections through port numbers to the respective instances? Thanks in advance!
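
    A minimal nginx sketch of the custom option — one public instance proxying into the private subnet (the upstream addresses are assumptions):

        # /etc/nginx/conf.d/proxy.conf (sketch)
        upstream webfarm {
            server 10.0.1.10:80;   # Apache2 instances in the private subnet
            server 10.0.1.11:80;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://webfarm;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }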

    Read the article

  • adding a custom user folder on Ubuntu

    - by Narcolapser
    Question: how do you add a custom folder to the collection of user folders that come with Ubuntu?

    Info: I just loaded my netbook with Ubuntu Desktop 10.04 LTS (Desktop because it is an Aspire One and the apocalypse seems to follow whenever I try to install Netbook Remix onto it). It comes with standard folders like Documents, Music, Pictures, Downloads (though this one doesn't appear until you actually download something), Videos, etc. These are handy little folders because they have little symbols on them and are nicely located in my file browser; it is basically like the folder layout that Windows had in Vista. I do a lot of little programming on this computer, so I have a folder in which I keep all these single-KB code files, obviously named "Code", that I keep in my home folder. But I would really like it to be listed next to my other user folders. In summary: how do you add a folder to the listing in the file browser? And, if possible, how do you give it an icon? (I understand fully that I will probably have to make said icon.) Those two things are what I'm seeking to do. ~n

    P.S. Please correct me if I'm using the wrong name. I just guessed and called them "user folders" because they were folders the user uses. Made sense. But if they have another name, like "libraries", please say so. Thanks.
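
    One hedged pointer: on the GNOME 2 desktop that 10.04 ships, the Places sidebar entries come from ~/.gtk-bookmarks, so a line like the sketch below adds a Code entry (the username is an example); the folder's icon can then be changed via its Properties dialog in Nautilus:

        echo "file:///home/n/Code Code" >> ~/.gtk-bookmarks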

    Read the article

  • Creating Custom ISO Images

    - by ericl42
    I am working on creating some custom ISO images using primarily Fedora and CentOS. I want the image to be a bootable live CD with some specific files on it. I also want it to have the option of being installed to the hard drive. I've read various articles but want to get a few more opinions, since I've never done this before. Currently I'm trying 2 different methods:

    1. Install Fedora with the configuration exactly how I want it and then run the livecd-tools program to pull everything I currently have into an ISO. I haven't got this to work yet, but I do see a few issues with it, such as the default passwords I had to put in.
    2. Run a Fedora live CD, install a few things I want on it, and then copy the image of it. I believe this would work better since it has more of a live CD feel. However, I'm not 100% sure how I should go about pulling the current image into my own ISO.

    I know some people have said to use mkisofs and a few other programs, but any advice would be greatly appreciated.
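
    For the first method, the usual livecd-tools workflow is kickstart-driven rather than snapshot-driven: you describe the packages and config in a .ks file and build from that. A sketch (the kickstart path is one of the samples the package ships with, and the label is an example):

        livecd-creator --config=/usr/share/livecd-tools/livecd-fedora-minimal.ks \
                       --fslabel=Custom-LiveCD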

    Read the article

  • Reliable custom Windows shortcut keys?

    - by Peter Baer
    I have global Windows shortcut keys assigned to several different cmd.exe instances. I do this by creating shortcuts to cmd.exe on my desktop and assigning each one a unique shortcut key (for example, CTRL + SHIFT + U). Pretty basic stuff. I'm using Win2K8 (R1 and R2).

    This works just fine... most of the time. But with infuriating regularity, sometimes it doesn't, or it works with a long delay (many seconds). It doesn't matter which app currently has focus (it can even be one of the command prompts). It doesn't matter which keys I assign (I've tried a few variations of WIN, CTRL and SHIFT). I did notice that this is often, but not always, correlated with explorer.exe struggling in some way or another (say, an Explorer window opened to a file share that's unavailable, or an app being unresponsive, or whatever). In other words, the shortcut-key handling appears to be very sensitive to unrelated system activity. Note that whenever I have this problem I can always successfully ALT + TAB to the window I want, but that's tedious. I use the shortcuts to these command windows hundreds of times a day, so even a 1% failure rate becomes really annoying.

    Is there a way to fix this, or is there some third-party utility out there that will RELIABLY intercept custom key combinations to bring focus to whatever apps I want, in a way that is independent of other system activity?

    ADDENDUM: There is a property of the Windows shortcuts that I would not want to lose if switching to a third-party hotkey tool: Windows shortcuts are idempotent. Once you've launched a shortcut to some app, pressing the shortcut key combo again takes you to the already-launched process; it does not launch a new process.
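
    AutoHotkey is the third-party tool most often suggested for this, and its bindings can reproduce the idempotent behaviour described in the addendum. A sketch (the window class and key combo are examples) that focuses an existing console or starts one if none exists:

        ; Ctrl+Shift+U: activate the existing cmd window, or launch one
        ^+u::
        if WinExist("ahk_class ConsoleWindowClass")
            WinActivate
        else
            Run cmd.exe
        return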

    Read the article

  • Custom keyboard map is causing issues with stuck keys

    - by Grumbel
    I have a Microsoft Ergonomic 4000 keyboard and I am running a custom keymap (Dvorak with some additions for umlauts):

        http://pingus.seul.org/~grumbel/tmp/md5/b054e11505c88e1bfc6ebd5da46bdb78-xmodmap_pke
        http://pingus.seul.org/~grumbel/tmp/md5/f5e42a5b8ba4a034c5945f719b3d2608-xmodmap_pm

    This used to work fine for years, and it still does, except that I am now having issues with a stuck Mode_switch key. When I hit Control_R and Mode_switch at the same time (which happens a lot by accident), the Mode_switch key gets into a 'stuck' state, and all letters I type afterwards come out in their umlaut form, as if Mode_switch were pressed. I can unstick Mode_switch by again hitting Control_R and Mode_switch at the same time, but that leaves GNOME in a broken state where it no longer reacts to my GNOME keyboard shortcuts. The key presses themselves are still registered by the window manager, as one can see changes in the applications (the cursor in Gnome Terminal will turn into an unfilled rect, as if the application lost focus), but they don't trigger the bound action.

    Does anybody have a clue what could be causing this? Or does anybody have an idea how I could debug it? xev doesn't seem to help here, as it reports normal KeyPress/KeyRelease events even when the key is stuck. Also, the GNOME key bindings don't get reported at all when it's in the 'broken' state; I assume they are captured by the window manager before they even reach xev.

    I am using Ubuntu 10.04 with GNOME and Metacity, and I have disabled all OpenGL-related effects, so Compiz shouldn't interfere. Some general info on which applications are involved in GNOME's key-binding handling would be helpful as well, as I assume it's Metacity, but restarting Metacity doesn't fix the issue.
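
    Two quick checks that can help pin down a latched modifier — standard X utilities, nothing specific to this keymap:

        # show which modifier (mod1..mod5) Mode_switch is attached to
        xmodmap -pm

        # list the keycodes that generate Mode_switch
        xmodmap -pke | grep -i mode_switch

        # watch the 'state' field of xev events live; it shows whether X
        # still considers a modifier latched even when presses look normal
        xev | grep --line-buffered state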

    Read the article

  • Define custom escape sequences in terminal

    - by Ipkiss
    I would like to change the escape sequences used by some keys in my terminal. My goal is to define custom mappings in Vim (the terminal version). In the following I use shift-space as an example, but I would prefer if the proposed solution could be generic. My current terminal (gnome-terminal) uses a simple space as the escape sequence for shift-space, as can be seen by typing ctrl-v shift-space. A quick check with true xterm shows the same behavior. I would like the shift-space key combo to generate another escape sequence (e.g., the one for shift-F30, which I would never use otherwise). So, how would I go about doing that? And is it really a good idea? Let me know if there are better alternatives...

    Note: I'm aware that this is only part of the problem: after the terminal sends a proper escape sequence for my keys, I still need to teach Vim what it means. But I think I know how to deal with that.
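
    For xterm specifically, the translations resource can attach an arbitrary string to a key chord. A sketch for ~/.Xresources — the sequence itself is an arbitrary choice, and this only covers the xterm case, since gnome-terminal does not read X resources:

        ! send a made-up CSI sequence for Shift+Space
        XTerm*vt100.translations: #override \
            Shift <Key>space: string(0x1b) string("[32;2~")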

    Read the article

  • Mac - KeyRemap4MacBook - Custom XML

    - by DjRikyx
    I hope some of you know this powerful pref panel to remap the keyboard. Since I'm using a PC keyboard, I wanted to make screenshots a bit easier to take. I managed to get working:

    - Command+Shift+3 to Stamp (full-screen screenshot)
    - Command+Shift+4 to Control+Stamp (selection-cursor screenshot)

    Now I want to remap Shift+Stamp to Command+Shift+4+Space to get the window screenshot. I tried, but nothing was working. Here is my current XML; I only need to add the last remap!

        <?xml version="1.0"?>
        <root>
            <item>
                <name>Custom PC Style Screenshot</name>
                <appendix>Command+Shift_L+4+Space to Shift+F13</appendix>
                <appendix>Command+Shift_L+4 to Control+F13</appendix>
                <appendix>Command+Shift_L+3 to F13</appendix>
                <identifier>private.custom_pc_style_screenshot</identifier>
                <autogen></autogen>
                <autogen>--KeyToKey-- KeyCode::F13, VK_CONTROL, KeyCode::KEY_4, ModifierFlag::COMMAND_L | ModifierFlag::SHIFT_L</autogen>
                <autogen>--KeyToKey-- KeyCode::F13, KeyCode::KEY_3, ModifierFlag::COMMAND_L | ModifierFlag::SHIFT_L</autogen>
            </item>
        </root>

    Hope someone of you can help me out! Thank you
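
    An untested sketch of the missing line, on the assumption that --KeyToKey-- accepts a sequence of destination keys (as later KeyRemap4MacBook releases do): it would map Shift+F13 to Cmd+Shift+4 followed by Space:

        <autogen>--KeyToKey-- KeyCode::F13, VK_SHIFT,
            KeyCode::KEY_4, ModifierFlag::COMMAND_L | ModifierFlag::SHIFT_L,
            KeyCode::SPACE</autogen>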

    Read the article

  • mod_perl custom configuration directives don't work when placed in .htaccess and there is <Location>

    - by al_l_ex
    I'm trying to complete Redmine's feature request #2693: Use Redmine.pm to authenticate for any directory (1). I don't have much knowledge of all these things and need help. Redmine uses the mod_perl module Redmine.pm for authentication & authorization. This module defines several custom configuration directives. I've successfully modified the patch from (1), and it works when all the config is in <Location>:

        <Location /digischrank/test>
            AuthType basic
            AuthName "Digischrank Test"
            Require valid-user

            PerlAccessHandler Apache::Authn::Redmine::access_handler
            PerlAuthenHandler Apache::Authn::Redmine::authen_handler

            RedmineDSN "DBI:mysql:database=SomedaTaBAse;host=localhost"
            RedmineDbUser "SoMeuSer"
            RedmineDbPass "SomePaSS"
            RedmineProject "digischrank"
        </Location>

    But when I move one of these directives (RedmineProject, see (1)) into an .htaccess file, Redmine.pm doesn't see it! I've tried changing <Location> to <Directory> and adding AllowOverride All. The directives from .htaccess are visible, but the remaining ones from <Directory> are not. I don't want to move all the directives to each .htaccess. When I add a <Location> in addition to the <Directory>, again only the directives from <Location> are visible. As far as I know, the directives should be merged. Am I missing something?

    Read the article

  • Facter - custom fact, returns empty data set when invoked by Puppet agent

    - by user3684494
    According to this Puppet Labs article, I can create custom facts from shell scripts. I have created a bash script that returns a single fact; it is packaged in a module's facts.d directory. The module is included on the target system via an ENC class. When invoked by the puppet agent on the target, it returns an empty set; when run by hand on the agent, it correctly returns the fact. The script has execute permission on the master, but does not have it on the agent. I saw a bug report related to permissions and file types, but that was Windows and supposedly fixed in Puppet version 3. What am I doing wrong?

    ENC definition:

        ---
        classes:
          facttest:

    Shell script:

        #!/bin/bash
        echo "test_fact1=$(hostname)"

    Permissions:

        master: -rwxr-xr-x 1 root root ... modules/facttest/facts.d/testfact.sh
        agent:  -rw-r--r-- 1 root root ... /var/lib/puppet/facts.d/testfact.sh

    Agent message:

        Fact file /var/lib/puppet/facts.d/testfact.sh was parsed but returned an empty data set

    Version information: Puppet master 3.5.1 (Debian) with Facter 2.0.1; Puppet agent 3.6.1 (OpenSUSE) with Facter 2.0.1.
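
    Worth noting (hedged, since it matches the symptoms rather than being confirmed above): facter only executes facts.d files that carry the executable bit; a non-executable file is instead parsed as a static key=value fact file, which for a shell script yields exactly this "empty data set" message. A sketch of shipping the fact with the right mode, as an alternative to relying on pluginsync's permissions (paths and names taken from the question):

        # inside the facttest module
        file { '/var/lib/puppet/facts.d/testfact.sh':
          ensure => file,
          mode   => '0755',
          source => 'puppet:///modules/facttest/facts.d/testfact.sh',
        }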

    Read the article

  • Exchange 2010: Send emails via SMTP with a custom From address to outside the domain

    - by marsze
    The requirements: (1) connect to Exchange via SMTP with (2) basic authentication and send emails with a (3) custom From address to (4) recipients outside the domain.

    I was able to get (1) - (3) working. I created a dedicated receive connector for this task and configured it like this:

        Permissions:
            ms-Exch-SMTP-Accept-Any-Recipient (for authenticated users)
            ms-Exch-SMTP-Accept-Authoritative-Domain-Sender (for authenticated users)
            ms-Exch-SMTP-Accept-Any-Sender (for authenticated users)

        Authentication:
            TLS
            Basic Authentication (without TLS)
            Exchange Server Authentication

    However, I'm still struggling with (4): I can send with "fake" From addresses to recipients inside the domain, and I can send with the original From address to recipients outside the domain. Can you tell me what I'm missing to configure Exchange to send emails with changed From addresses to recipients outside the domain? (Or is this even possible at all?) Thanks.

    UPDATE: I have to correct myself: it seems to be working after all. There must have been some issue with the mailbox I used for testing; it turned out it works with other external mailboxes. However, I still have no idea what was different there... Anyway, you can take this as documentation on how to configure Exchange in such a way ;)
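
    For reference, the permission set above can be granted from the Exchange Management Shell; a sketch where the connector name is an example:

        Get-ReceiveConnector "MYSERVER\App Relay" | Add-ADPermission `
            -User "NT AUTHORITY\Authenticated Users" `
            -ExtendedRights "ms-Exch-SMTP-Accept-Any-Sender",
                            "ms-Exch-SMTP-Accept-Any-Recipient",
                            "ms-Exch-SMTP-Accept-Authoritative-Domain-Sender"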

    Read the article
