Search Results

Search found 30778 results on 1232 pages for 'private key'.


  • How to do dependency Injection and conditional object creation based on type?

    - by Pradeep
    I have a service endpoint initialized using DI, in the following style. This endpoint is used across the app.

        public class CustomerService : ICustomerService
        {
            private IValidationService ValidationService { get; set; }
            private ICustomerRepository Repository { get; set; }

            public CustomerService(IValidationService validationService, ICustomerRepository repository)
            {
                ValidationService = validationService;
                Repository = repository;
            }

            public void Save(CustomerDTO customer)
            {
                if (ValidationService.Valid(customer))
                    Repository.Save(customer);
            }
        }

    Now, with changing requirements, there are going to be different types of customers (Legacy/Regular). The requirement is that, based on the type of the customer, I have to validate and persist the customer in a different way (e.g. if it is a Legacy customer, persist to LegacyRepository). The wrong way to do this would be to break DI and do something like:

        public void Save(CustomerDTO customer)
        {
            if (customer.Type == CustomerTypes.Legacy)
            {
                if (LegacyValidationService.Valid(customer))
                    LegacyRepository.Save(customer);
            }
            else
            {
                if (ValidationService.Valid(customer))
                    Repository.Save(customer);
            }
        }

    My options seem to be: inject all possible IValidationService and ICustomerRepository implementations and switch based on type, which seems wrong; or change the service signature to Save(IValidationService validation, ICustomerRepository repository, CustomerDTO customer), which is an invasive change; or break DI and use the Strategy pattern approach for each type, doing something like:

        validation = CustomerValidationServiceFactory.GetStrategy(customer.Type);
        validation.Valid(customer);

    but now I have a static method which needs to know how to initialize different services. I am sure this is a very common problem. What is the right way to solve this without changing service signatures or breaking DI?
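
    A hedged sketch of one common resolution, rather than the one right answer: push the per-type switch into a factory abstraction that the container composes, so CustomerService keeps a single constructor dependency and an unchanged Save signature. All names here (ICustomerStrategyFactory, RegularValidationService, LegacyRepository, and so on) are hypothetical illustrations, not taken from the original question.

        // Hypothetical factory interface: the only new dependency CustomerService sees.
        public interface ICustomerStrategyFactory
        {
            IValidationService GetValidator(CustomerTypes type);
            ICustomerRepository GetRepository(CustomerTypes type);
        }

        // Concrete factory: the container injects each known strategy once,
        // so no static initialization knowledge lives anywhere in user code.
        public class CustomerStrategyFactory : ICustomerStrategyFactory
        {
            private readonly IValidationService _regularValidator;
            private readonly IValidationService _legacyValidator;
            private readonly ICustomerRepository _regularRepository;
            private readonly ICustomerRepository _legacyRepository;

            public CustomerStrategyFactory(
                RegularValidationService regularValidator,
                LegacyValidationService legacyValidator,
                RegularRepository regularRepository,
                LegacyRepository legacyRepository)
            {
                _regularValidator = regularValidator;
                _legacyValidator = legacyValidator;
                _regularRepository = regularRepository;
                _legacyRepository = legacyRepository;
            }

            public IValidationService GetValidator(CustomerTypes type)
            {
                return type == CustomerTypes.Legacy ? _legacyValidator : _regularValidator;
            }

            public ICustomerRepository GetRepository(CustomerTypes type)
            {
                return type == CustomerTypes.Legacy ? _legacyRepository : _regularRepository;
            }
        }

        // CustomerService is unchanged except for its single dependency;
        // the Save signature stays exactly as before.
        public class CustomerService : ICustomerService
        {
            private readonly ICustomerStrategyFactory _factory;

            public CustomerService(ICustomerStrategyFactory factory)
            {
                _factory = factory;
            }

            public void Save(CustomerDTO customer)
            {
                if (_factory.GetValidator(customer.Type).Valid(customer))
                    _factory.GetRepository(customer.Type).Save(customer);
            }
        }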

    Read the article

  • How do I copy a package from Debian to my PPA?

    - by Bernhard Reiter
    I'd like to add the latest gourmet package from Debian sid to our team's PPA, so Ubuntu users who would like to run an up-to-date version of Gourmet can add that PPA to their software sources. (Dependency-wise, that shouldn't be much of an issue, as pretty much all our current dependencies are already available in all currently supported Ubuntu versions.) I've downloaded the *.dsc file and the debian and orig tarballs, and even figured out I could use this for the package's source.changes file. I also downloaded the Debian maintainer's public key so dput can validate the package. I then tried to upload the package to our PPA using dput ppa:~gourmet/ppa gourmet_0.17.3-1_source.changes (I also tried without the tilde.) This seemed to succeed, but I didn't get a confirmation email, and no packages are displayed at our PPA, which leads me to believe that the package was rejected because the Debian maintainer's key is obviously not among our team members' keys. So what's the easiest way to "copy" a package from Debian (sid) to a Launchpad PPA? Do I really need to rebuild the entire package locally before I can upload it?
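
    A hedged sketch of two common routes, assuming the ubuntu-dev-tools and devscripts packages are installed. Launchpad only accepts uploads signed by a team member's key, so the Debian maintainer's signature has to be replaced with your own; the exact mirror URL and version suffix below are illustrative assumptions:

        # Route 1: let backportpackage fetch, re-version, sign, and upload.
        backportpackage -u ppa:gourmet/ppa -s unstable gourmet

        # Route 2: re-sign the existing source package by hand.
        dget http://deb.debian.org/debian/pool/main/g/gourmet/gourmet_0.17.3-1.dsc
        cd gourmet-0.17.3
        dch --local ~ppa "No-change upload to the Gourmet PPA."  # bumps version to ...-1~ppa1
        debuild -S               # rebuild the source package, signed with your own key
        dput ppa:gourmet/ppa ../gourmet_0.17.3-1~ppa1_source.changes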

    Read the article

  • Cloud services, Public IPs and SIP

    - by Guido N
    I'm trying to run a custom SIP application (which uses JAIN SIP 1.2) on a cloud box. What I'd really like is a real public IP, i.e. one that is listed by the "ifconfig -a" command, because at the moment I don't want to write additional SIP code or add a SIP proxy just to manage private IP addresses and address translation. I gave Amazon EC2 a go, but as reported at http://stackoverflow.com/questions/10013549/sip-and-ec2-elastic-ips it's not fit for this purpose (they do a 1:1 NAT translation between the private IP of the box and its Elastic IP). Does anyone know of a cloud service that provides real static public IP addresses?

    Read the article

  • Ways to organize interface and implementation in C++

    - by Felix Dombek
    I've seen that there are several different paradigms in C++ concerning what goes into the header file and what goes into the cpp file. AFAIK, most people, especially those from a C background, do:

        // foo.h
        class foo
        {
        private:
            int mem;
            int bar();
        public:
            foo();
            foo(const foo&);
            foo& operator=(foo);
            ~foo();
        };

        // foo.cpp
        #include "foo.h"
        int foo::bar() { return mem; }
        foo::foo() { mem = 42; }
        foo::foo(const foo& f) { mem = f.mem; }
        foo& foo::operator=(foo f) { mem = f.mem; return *this; }
        foo::~foo() {}

        int main(int argc, char *argv[]) { foo f; }

    However, my lecturers usually teach C++ to beginners like this:

        // foo.h
        class foo
        {
        private:
            int mem;
            int bar() { return mem; }
        public:
            foo() { mem = 42; }
            foo(const foo& f) { mem = f.mem; }
            foo& operator=(foo f) { mem = f.mem; return *this; }
            ~foo() {}
        };

        // foo.cpp
        #include "foo.h"
        int main(int argc, char* argv[]) { foo f; }
        // other global helper functions, DLL exports, and whatnot

    Originally coming from Java, I have also always stuck to this second way, for several reasons: I only have to change something in one place if the interface or method names change; I like the different indentation of things in classes when I look at their implementation; and I find names more readable as foo compared to foo::foo. I want to collect pros and cons for either way. Maybe there are even other ways? One disadvantage of my way is of course the need for occasional forward declarations.

    Read the article

  • Windows Firewall failing after 9-12 hours?

    - by routeNpingme
    I have 2 VM servers in the exact same NIC configuration: Server 2003 R2, one NIC connected to private (hardware firewall) network in a 10.x private address space, and one NIC connected straight to public internet. Windows Firewall is enabled for the Public Internet NIC only. Now, what doesn't make sense - this fails generally after 9-12 hours. It's not exact, but once or twice a day, traffic will just stop on the Internet NIC. No event log entries when it happens, and restarting the Windows Firewall service as well as stopping or restarting IPSec Services (just for fun) has no effect. Once the server is rebooted, everything is fine again for another 1/2 day. Any suggestions?

    Read the article

  • How to stop an animation model in XNA?

    - by Mehdi Bugnard
    I have run into a difficulty stopping an animation. Everything works great for starting the animation, but I do not see how to stop it and later resume it from where it was paused. The animationPlayer.StartClip(clip) call starts the animation, but I cannot find a way to stop it. Thanks a lot. Here is the code I use:

        protected override void LoadContent()
        {
            // Model - Player
            model_player = Content.Load<Model>("Models\\Player\\models");

            // Look up our custom skinning information.
            SkinningData skinningData = model_player.Tag as SkinningData;
            if (skinningData == null)
                throw new InvalidOperationException("This model does not contain a SkinningData tag.");

            // Create an animation player, and start decoding an animation clip.
            animationPlayer = new AnimationPlayer(skinningData);
            AnimationClip clip = skinningData.AnimationClips["ArmLowAction_006"];
            animationPlayer.StartClip(clip);
        }

        protected override void Update(GameTime gameTime)
        {
            KeyboardState key = Keyboard.GetState();

            // If the player doesn't move -> stop the animation
            if (!key.IsKeyDown(Keys.W) && !keyStateOld.IsKeyUp(Keys.S) &&
                !keyStateOld.IsKeyUp(Keys.A) && !keyStateOld.IsKeyUp(Keys.D))
            {
                // animation stop? does not exist?
                animationPlayer.Stop();
                isPlayerStop = true;
            }
            else
            {
                if (isPlayerStop == true)
                {
                    isPlayerStop = false;
                    animationPlayer.StartClip(clip);
                }
            }
        }
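
    As the question's own comment suspects, the AnimationPlayer from Microsoft's Skinned Model sample (which this code appears to use) ships with no Stop or Pause method. A minimal sketch of a common workaround, assuming that sample's Update(TimeSpan, bool, Matrix) signature: simply stop advancing the player while the character is idle, so the clip freezes on its current keyframe and resumes in place.

        // Hedged sketch: gate the decoder update instead of stopping the clip.
        protected override void Update(GameTime gameTime)
        {
            KeyboardState key = Keyboard.GetState();

            bool moving = key.IsKeyDown(Keys.W) || key.IsKeyDown(Keys.S) ||
                          key.IsKeyDown(Keys.A) || key.IsKeyDown(Keys.D);

            // Only advance the animation while the player moves; skipping the
            // call loses no state, so the clip resumes where it stopped.
            if (moving)
            {
                animationPlayer.Update(gameTime.ElapsedGameTime, true, Matrix.Identity);
            }

            isPlayerStop = !moving;
            base.Update(gameTime);
        }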

    Read the article

  • Ubuntu Server login not recognizing the keyboard after entering username.

    - by Jeff Malewski
    I'm having similar issues with logging into Ubuntu Server. The chief problem is that once I enter my user name and hit Enter, I can't enter anything for my password - it won't accept any keystrokes until I press Ctrl+any key. Once I've pressed Ctrl+any key, I'm able to type again, but I have never been able to enter more than 3 characters before the 60-second time limit. This problem is present on fresh installs of both 10.04 and 9.10. Part of the problem is likely my antique PC, which has an old eMachines TriGem i850-based motherboard and an equally ancient Nvidia 4x AGP video card. Initially I was going to install Ubuntu 10.10, but ORCA running with both the screen reader and full-screen magnification crashed the system and smoked a stick of Rambus memory. Is there any fix to this problem? Jeff

    Read the article

  • Seagate GoFlex NAS + Horrible Speed = Bad Experience

    - by Jon H
    I am having issues with transfer speeds from my desktop PC to my NAS. The NAS and my desktop are both hooked up to a gigabit gateway with Cat 5e. I see transfer rates of up to 4.0 MB/second; the norm is about 2.5 MB/second. There are 3 partitions on my NAS: Public, Private, Backup. When I transfer from Private to Public I see the speeds above; within the same partition it is almost instant. I was wondering whether the speeds I am seeing are due to my computer or to the NAS. I was looking into building my own media server because of these horrible speeds. Is there anything I can do in the meantime to speed this up?

        Motherboard = M3970AM-HP (Angelica)
        Processor = AMD FX 6100
        Ram = 10GB PC3-10600 MB/sec
        Hard Drive (1) = 1.5TB SATA 3.0GB 5400RPM
        Hard Drive (2) = 120GB SATA SSD
        NAS = Seagate 3TB GoFlex Home
        Connection (1) = 1000 Base T
        Connection (2) = Wireless N

    Read the article

  • Postfix to deliver mail to a virtual address mailbox

    - by Chloe
    Postfix version 2.6.6, Dovecot version 2.0.9. I want to set up Postfix + Dovecot. Dovecot seems to be working: I can authenticate. However, the mailbox is empty! Nothing gets delivered! I followed many tutorials on Postfix + Dovecot, but they seem to want to complicate things by using the Dovecot LDA or MySQL. I just want it to be very simple, and having Postfix deliver to the virtual mailboxes is fine. I don't need MySQL either. I already set up a custom password file that Dovecot uses for authentication, and I can log in to POP3 with SSL. I can see from the logs that Postfix is delivering to the system user accounts (the catch-all) instead of the virtual users that I set up in Dovecot. The SMTP + SSL authentication seems to work also. I can also see from the logs that Dovecot is checking the correct virtual mail folder. I just need to figure out how to get Postfix to deliver to the virtual mailboxes. I have the following settings, which I believe are relevant. Let me know what other settings you need to see:

        alias_maps = hash:/etc/aliases
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = xxx.com
        myhostname = mail.xxx.com
        mynetworks = 99.99.99.99, 99.99.99.99
        myorigin = $mydomain
        relay_domains = $mydestination, xxx.com, domain2.net, domain3.com
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_recipient_restrictions = reject_non_fqdn_sender reject_non_fqdn_recipient reject_unknown_recipient_domain permit_sasl_authenticated check_relay_domains
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = check_sender_mx_access cidr:/etc/postfix/bogus_mx reject_invalid_hostname reject_unknown_sender_domain reject_non_fqdn_sender
        virtual_mailbox_base = /var/spool/vmail
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_minimum_uid = 444

    Postfix master.cf:

        submission inet n - - - - smtpd
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_sasl_type=dovecot
          -o smtpd_sasl_path=private/auth
          -o smtpd_sasl_security_options=noanonymous
          -o smtpd_sasl_local_domain=$myhostname
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject
          -o smtpd_sender_login_maps=hash:/etc/postfix/virtual
          -o smtpd_sender_restrictions=reject_sender_login_mismatch
          -o smtpd_recipient_restrictions=reject_non_fqdn_recipient,reject_unknown_recipient_domain,permit_sasl_authenticated,reject

    Dovecot related:

        mail_location = maildir:~/Maildir
        passdb {
            args = /etc/dovecot/users.conf
            driver = passwd-file
        }
        service auth {
            unix_listener /var/spool/postfix/private/auth {
                mode = 0660
                user = postfix
            }
        }

    The virtual mail user:

        vmail:x:444:99:virtual mail users:/var/spool/vmail:/sbin/nologin

    Here is /var/log/maillog when I try to send something to myself:

        Oct 25 22:10:05 308321 postfix/smtpd[2200]: connect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:05 308321 postfix/smtpd[2200]: D224BD4753: client=user-999.cable.mindspring.com[99.99.99.99], sasl_method=LOGIN, [email protected]
        Oct 25 22:10:06 308321 postfix/cleanup[2207]: D224BD4753: message-id=<7DC3C163CFFC483AB6226F8D3D9969D2@dumbopc>
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: from=<[email protected]>, size=1385, nrcpt=1 (queue active)
        Oct 25 22:10:06 308321 postfix/smtpd[2200]: disconnect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:06 308321 postfix/local[2208]: D224BD4753: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=1.1, delays=0.53/0.02/0/0.51, dsn=2.0.0, status=sent (delivered to mailbox)
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: removed
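
    The log line above is the telltale: relay=local means Postfix handed the message to its local(8) delivery agent, because xxx.com is listed in mydestination (and relay_domains) as well as in virtual_mailbox_domains, and a domain may belong to only one address class. A hedged sketch of the usual main.cf fix, with the map contents as placeholders:

        # Keep the domain out of mydestination so the virtual(8) agent owns it.
        mydestination = $myhostname, localhost.$mydomain, localhost
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_mailbox_base = /var/spool/vmail
        # virtual_mailbox_maps lists each valid recipient and its mailbox path
        # relative to virtual_mailbox_base, e.g. "user@xxx.com  xxx.com/user/Maildir/"
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_uid_maps = static:444
        virtual_gid_maps = static:99

    After editing, run postmap /etc/postfix/vmailbox and postfix reload.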

    Read the article

  • Setting Up SNI with Apache 2.2.12 and openssl

    - by CCG121
    I am running Apache 2.2.12 and OpenSSL 0.9.8g. All of my Apache virtual hosts are in /etc/apache2/sites-available/default, and I have two domains with certificates, www.site.com and d7.site.com. My virtual hosts:

        <VirtualHost *:443>
            DocumentRoot /var/www/domain.com
            ServerAdmin [email protected]
            ServerName www.name.tld
            SSLStrictSNIVHostCheck off
            SSLVerifyClient None
            SSLEngine on
            SSLCertificateFile /var/www/sslcerts/name.tld/www_name_tld.crt
            SSLCertificateKeyFile /var/www/sslcerts/name.tld/private.key
        </VirtualHost>

        <VirtualHost *:443>
            DocumentRoot /var/www/d7
            ServerAdmin [email protected]
            ServerName d7.domain.tld
            SSLStrictSNIVHostCheck off
            SSLVerifyClient None
            SSLEngine on
            SSLCertificateFile /var/www/sslcerts/d7.domain.tld/server.crt
            SSLCertificateKeyFile /var/www/sslcerts/d7.domain.tld/private.key
        </VirtualHost>
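
    A hedged note on what is usually missing from this picture: name-based SSL vhosts on Apache 2.2 also need a NameVirtualHost declaration for the *:443 address, placed once before the first such vhost, and the OpenSSL build must have TLS extensions (SNI) enabled. A minimal sketch, assuming the stock Debian/Ubuntu layout:

        # e.g. in ports.conf or at the top of the vhost file
        Listen 443
        NameVirtualHost *:443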

    Read the article

  • State Design Pattern .NET Code Sample

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        class Program
        {
            static void Main(string[] args)
            {
                Person p1 = new Person("P1");
                Person p2 = new Person("P2");
                p1.EatFood();
                p2.EatFood();
                p1.Vomit();
                p2.Vomit();
            }
        }

        interface StomachState
        {
            void Eat(Person p);
            void Vomit(Person p);
        }

        class StomachFull : StomachState
        {
            public void Eat(Person p) { Console.WriteLine("Can't eat more."); }

            public void Vomit(Person p)
            {
                Console.WriteLine("I've just Vomited.");
                p.StomachState = new StomachEmpty();
            }
        }

        class StomachEmpty : StomachState
        {
            public void Eat(Person p)
            {
                Console.WriteLine("I've just had food.");
                p.StomachState = new StomachFull();
            }

            public void Vomit(Person p) { Console.WriteLine("Nothing to Vomit."); }
        }

        class Person
        {
            private StomachState stomachState;
            private String personName;

            public Person(String personName)
            {
                this.personName = personName;
                StomachState = new StomachEmpty();
            }

            public StomachState StomachState
            {
                get { return stomachState; }
                set
                {
                    stomachState = value;
                    Console.WriteLine(personName + " Stomach State Changed to " + StomachState.GetType().Name);
                    Console.WriteLine("***********************************************\n");
                }
            }

            public Person(StomachState StomachState)
            {
                this.StomachState = StomachState;
            }

            public void EatFood() { StomachState.Eat(this); }
            public void Vomit() { StomachState.Vomit(this); }
        }

    Read the article

  • Cannot write to registry while installing Microsoft Access 2010 - Error 1406

    - by Rillanon
    While installing, I get an error: "Microsoft Access 2010 encountered an error during setup. Error 1406. Setup cannot write the value to the registry key \Software\Classes\Interface\{000C036F-0000-0000-C000-000000000046}\ProxyStubClsid. Verify that you have sufficient permissions to access the registry or contact Microsoft Product Support Services (PSS) for assistance." I went to regedit to check on the key that the error was talking about, but when I clicked on it, it said the file was not found. I'm using 64-bit Windows 7 Ultimate. Any ideas?

    Read the article

  • Why The Athene Group Chose Fusion CRM

    - by Tony Berk
    A guest post by Vikas Bhambri, Managing Partner, The Athene Group. This year, The Athene Group (www.theathenegroup.com) celebrated our tenth anniversary. The company has accomplished a lot in ten years, overcoming a number of hurdles and challenges to grow organically into a 150+ person global company with offices in the US, UK, and India and customers in the US, Canada, and Europe. Now more than ever, given the current global landscape from an economic and competitive standpoint, it was vital that we make some changes to remain successful for the next ten years. There were two key initiatives that we discussed internally that would enable us to accomplish this: collaboration and the concept of "insight to action". With our existing Oracle CRM On Demand platform we had components of this, but not the full depth and breadth that we were looking for. When we started to discuss Fusion CRM, we immediately saw several next-generation tools that would embrace these two objectives. For a consulting and development organization, the collaboration required between business development and consulting delivery is as important as the collaboration required during projects between the project delivery and account management teams. The Activity Streams functionality in Fusion CRM immediately addressed the communication of key discussion topics and exchanges around our clients. Of course, when we saw the Oracle Social Network (which is part of our Fusion CRM roadmap), we were blown away. The combination of OSN and our CRM is going to make us more effective as we discuss and work cohesively on client engagements, ensuring mutual success for both Athene and our clients. When we looked at "insight to action", we saw that we had a great platform when folks were at their desks; unfortunately, a lot of our business development and consulting folks are on the road. Fusion Mobile Sales and Fusion Outlook Desktop provide information to our teams when they are on the go, so that they can provide real-time information and react to real-time information provided by their peers. We are in the early stages of our transformative experience with Fusion CRM, but we believe the platform, along with our people and processes, is going to help us achieve our goals in the future.

    Read the article

  • Disabling the Command-Enter shortcut in Mac Entourage.

    - by Bruce
    It seems like disabling a shortcut should not be such a big deal, but I cannot seem to do it for any shortcut, and specifically not for the combination that I keep hitting by mistake every single day. The smaller space bar makes it very easy to hit the Command key by mistake, and Return is a commonly used key when typing. I keep sending important e-mails before I am done typing, or worse, before I am done editing. I do not necessarily want to disable all the shortcuts, but that one for sure. The choices for changing anything in Entourage seem very limited. [Entourage for Mac 2008. Version 12.2.8. (101117) ESD] It is easy enough just to hit "Send." This shortcut causes a lot more trouble than it saves. Help.

    Read the article

  • Multiple Copies of Windows Calculator

    - by Brian Boatright
    Just did a clean install of Win7 x64. I have a Microsoft Ergo Keyboard 4000 and use the calculator key a lot. Previously I could hit it and get multiple copies of Calculator to pop up. Now it will only show one copy of Calculator. I tried adding a shortcut to the calculator app, but it has the same limitation. However, if I click the calculator icon, it will open a new one each time. How can I fix this so each time I press the calculator key it will open a new copy?

    Read the article

  • In Nginx, can I handle either a Location: url or a Content-Type: text/html response from memcached?

    - by Sean Foo
    I'm setting up an nginx - apache reverse proxy where nginx handles the static files and apache the dynamic. I have a search engine, and depending on the search parameter I either directly forward the user to the page they are looking for or provide a set of search results. I cache these results in memcached as

        key:   /search.cgi?q=foo
        value: LOCATION: http://www.example.com/foo.html

    and

        key:   /search.cgi?q=bar
        value: CONTENT-TYPE: text/html
               <html>
               ....
               ....
               </html>

    I can pull the "Content-Type ..." values out of memcached using nginx and send them to the user, but I can't quite figure out how to handle a returned value like "Location ...". Can I?
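
    For the half that already works, a hedged sketch of the plain memcached_pass setup (all standard nginx directives): nginx serves whatever memcached returns as the response body, which is also why a stored "Location:" line cannot become a real redirect header; handling that case needs something beyond stock memcached_pass, for example storing only redirect targets under a separate key namespace and resolving them in the backend or an embedded script.

        location /search.cgi {
            set            $memcached_key $request_uri;
            memcached_pass 127.0.0.1:11211;
            default_type   text/html;
            # On a miss (or backend error), fall through to Apache.
            error_page     404 502 = @apache;
        }

        location @apache {
            proxy_pass http://127.0.0.1:8080;
        }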

    Read the article

  • How to make schema dumps comparable between Windows and Linux

    - by Jonathan
    I have two systems running, one on Linux and the other on Windows. From the Linux box, I ran pg_dump against both systems and dumped the schema. pg_dump command:

        pg_dump -h HOST -U USER -s -f /tmp/out.sql DB_NAME

    After I removed all of the "--" comments, I diffed the files together. Diff output snippet, Linux compared to Windows:

        - ADD CONSTRAINT sys_c004775 FOREIGN KEY (ruleid) REFERENCES rule(ruleid);
        + ADD CONSTRAINT sys_c004775 FOREIGN KEY (ruleid) REFERENCES "rule"(ruleid);

    The Linux dump does not quote any entities, and the Windows one does. Is this a function of some encoding, or just a difference between Windows and Linux? Is there an option in pg_dump to make the output more consistent?
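
    A hedged suggestion: even though a single pg_dump binary produced both files, much of the DDL text (such as these constraint clauses) is generated server-side, so two different PostgreSQL server versions can disagree about when quoting is needed; "rule" is a reserved word that newer versions quote. If the client and servers are at version 9.1 or later, the --quote-all-identifiers option forces every identifier to be quoted, making the dumps directly comparable:

        pg_dump -h HOST -U USER -s --quote-all-identifiers -f /tmp/out.sql DB_NAME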

    Read the article

  • Per-vertex animation with VBOs: Stream each frame or use index offset per frame?

    - by charstar
    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex keyframes). In either case, the animations are known in full at load time; that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will be at different poses/keyframes at the same time). Assume color and texture coordinate buffers are static.

    Goal: To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a keyframe + interpolation-curve solver (reinventing Blender vertex groups and IPOs).

    Current considerations: 1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING). 2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO; then each character simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices + numNormals) * frameNumber). (VBO in STATIC)

    Known trade-offs: In 1 above, each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame - both client- and pipeline-intensive. In 2 above, there would be few VBOs, therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large. Are there any pitfalls to number 2 (aside from finite memory)? Are there other methods that I am missing?
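
    For concreteness, a hedged sketch of what option 2 looks like at draw time, written against an OpenTK-style C# binding; the buffer layout, variable names, and fixed-function vertex arrays are all assumptions for illustration, not from the original post:

        // One STATIC VBO per animation, laid out per frame as
        // [vertices (numVertices * 3 floats)][normals (numNormals * 3 floats)].
        int frameBytes = (numVertices * 3 + numNormals * 3) * sizeof(float);

        GL.BindBuffer(BufferTarget.ArrayBuffer, animationVbo);

        // Each character carries only an integer frame index into the buffer.
        long vertOffset = (long)character.FrameNumber * frameBytes;
        long normOffset = vertOffset + numVertices * 3 * sizeof(float);

        GL.EnableClientState(ArrayCap.VertexArray);
        GL.EnableClientState(ArrayCap.NormalArray);
        GL.VertexPointer(3, VertexPointerType.Float, 0, (IntPtr)vertOffset);
        GL.NormalPointer(NormalPointerType.Float, 0, (IntPtr)normOffset);

        GL.DrawArrays(PrimitiveType.Triangles, 0, numVertices);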

    Read the article

  • Ninject/DI: How to correctly pass initialisation data to injected type at runtime

    - by MrLane
    I have the following two classes:

        public class StoreService : IStoreService
        {
            private IEmailService _emailService;

            public StoreService(IEmailService emailService)
            {
                _emailService = emailService;
            }
        }

        public class EmailService : IEmailService
        {
        }

    Using Ninject, I can set up bindings with no problem to get it to inject a concrete implementation of IEmailService into the StoreService constructor. StoreService is actually injected into the code-behind of an ASP.NET WebForm, like so:

        [Ninject.Inject]
        public IStoreService StoreService { get; set; }

    But now I need to change EmailService to accept an object that contains SMTP-related settings (pulled from the ApplicationSettings of the Web.config). So I changed EmailService to look like this:

        public class EmailService : IEmailService
        {
            private SMTPSettings _smtpSettings;

            public void SetSMTPSettings(SMTPSettings smtpSettings)
            {
                _smtpSettings = smtpSettings;
            }
        }

    Setting SMTPSettings in this way also requires it to be passed into StoreService (via another public method). This has to be done in the Page_Load method in the WebForms code-behind (I only have access to the Settings class in the UI layer). With manual/poor man's DI I could pass SMTPSettings directly into the constructor of EmailService and then inject EmailService into the StoreService constructor. With Ninject I don't have access to the instances of injected types outside of the objects they are injected into, so I have to set their data AFTER Ninject has already injected them, via a separate public setter method. This seems wrong to me. How should I really be solving this scenario?
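
    One hedged way to keep constructor injection throughout is to bind SMTPSettings itself in a Ninject module, so the container rather than Page_Load composes EmailService. Bind/ToMethod/InSingletonScope are standard Ninject APIs; the SMTPSettings property names and appSettings keys below are assumptions for illustration:

        using System.Configuration;
        using Ninject.Modules;

        public class ServiceModule : NinjectModule
        {
            public override void Load()
            {
                // Build the settings object once, from Web.config appSettings.
                Bind<SMTPSettings>().ToMethod(ctx => new SMTPSettings
                {
                    Host = ConfigurationManager.AppSettings["SmtpHost"],    // hypothetical keys
                    Port = int.Parse(ConfigurationManager.AppSettings["SmtpPort"])
                }).InSingletonScope();

                Bind<IEmailService>().To<EmailService>();
                Bind<IStoreService>().To<StoreService>();
            }
        }

        // EmailService can then declare the dependency instead of exposing a setter:
        public class EmailService : IEmailService
        {
            private readonly SMTPSettings _smtpSettings;

            public EmailService(SMTPSettings smtpSettings)
            {
                _smtpSettings = smtpSettings;
            }
        }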

    Read the article

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they are causing permission problems for all other clients. Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors. The details of this behavior seem to also be dependent on the version of OSX the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OSX client and create a new directory on it, the directory will be created with drwxr-x-rx permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group who should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server:

        [global]
        create mask = 0666
        directory mask = 0777

        [share]
        force directory mode = 0775
        force create mode = 0660

    I was under the impression that these settings should make sure that directories are at least created with rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder will have the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client. I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming that this is caused because we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect from anyone. This is the complete smb.conf:

        [global]
        encrypt passwords = yes
        dns proxy = no
        strict locking = no
        read raw = yes
        write raw = yes
        oplocks = yes
        max xmit = 65535
        deadtime = 15
        display charset = LOCALE
        max log size = 10
        syslog only = yes
        syslog = 1
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        smb passwd file = /var/etc/private/smbpasswd
        private dir = /var/etc/private
        getwd cache = yes
        guest account = nobody
        map to guest = Bad Password
        obey pam restrictions = Yes
        # NOTE: read smb.conf.
        directory name cache size = 0
        max protocol = SMB2
        netbios name = freenas
        workgroup = COMPANY
        server string = FreeNAS Server
        store dos attributes = yes
        hostname lookups = yes
        security = user
        passdb backend = ldapsam:ldap://ldap.company.local
        ldap admin dn = cn=admin,dc=company,dc=local
        ldap suffix = dc=company,dc=local
        ldap user suffix = ou=Users
        ldap group suffix = ou=Groups
        ldap machine suffix = ou=Computers
        ldap ssl = off
        ldap replication sleep = 1000
        ldap passwd sync = yes
        #ldap debug level = 1
        #ldap debug threshold = 1
        ldapsam:trusted = yes
        idmap uid = 10000-39999
        idmap gid = 10000-39999
        create mask = 0666
        directory mask = 0777
        client ntlmv2 auth = yes
        dos charset = CP437
        unix charset = UTF-8
        log level = 1

        [share]
        path = /mnt/zfs0
        printable = no
        veto files = /.snap/.windows/.zfs/
        writeable = yes
        browseable = yes
        inherit owner = no
        inherit permissions = no
        vfs objects = zfsacl
        guest ok = no
        inherit acls = Yes
        map archive = No
        map readonly = no
        nfs4:mode = special
        nfs4:acedup = merge
        nfs4:chown = yes
        hide dot files
        force directory mode = 0775
        force create mode = 0660

    Read the article

  • sshd: How to enable PAM authentication for specific users

    - by Brad
    I am using sshd and allow logins with public key authentication. I want to allow select users to log in with a PAM two-factor authentication module. Is there any way I can allow PAM two-factor authentication for a specific user? I don't want it enabled for all users; by the same token, I only want to enable password authentication for specific accounts. I want my SSH daemon to reject password authentication attempts, to fool would-be hackers into thinking that I will not accept password authentication - except for the case in which someone knows my heavily guarded secret account, which is password-enabled. I want to do this for cases in which my SSH clients will not let me do either secret-key or two-factor authentication.
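
    A hedged sketch of the standard OpenSSH mechanism for this: Match blocks at the end of sshd_config override the global policy per user. Directive names vary slightly across OpenSSH versions, and "secretadmin" is a placeholder:

        # Global policy: public keys only.
        PasswordAuthentication no
        KbdInteractiveAuthentication no
        UsePAM yes

        # Per-user override: this account may use PAM keyboard-interactive
        # (e.g. two-factor) and password authentication.
        Match User secretadmin
            PasswordAuthentication yes
            KbdInteractiveAuthentication yes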

    Read the article

  • Mac | Port Forwarding for Remote Desktop

    - by Vaibhav Bajpai
    I have two Mac notebooks at home and have assigned them static private IPs. I have also set my router up with a DynDNS address, which updates every time my router gets a new public IP. I have enabled Screen Sharing on both notebooks. I can successfully go to my router's web page using the DynDNS address. I understand I need to port-forward to get Screen Sharing to work from outside. Let's assume the notebooks have private IPs 192.168.1.2 and 192.168.1.3. I am kind of lost here and would appreciate some help (I need to be able to remote-desktop into both notebooks).
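
    A hedged sketch of the usual arrangement (OSX Screen Sharing is VNC on TCP port 5900; the exact router UI varies): forward a distinct external port to each notebook.

        External port 5900  ->  192.168.1.2 port 5900   (notebook 1)
        External port 5901  ->  192.168.1.3 port 5900   (notebook 2)

        From outside, connect with:
        vnc://yourhost.dyndns.org:5900   (notebook 1)
        vnc://yourhost.dyndns.org:5901   (notebook 2)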

    Read the article

  • Experiencing the New Social Enterprise

    - by kellsey.ruppel
    Social media and networking tools, popularly known as Web 2.0 technologies, are rapidly transforming user expectations of enterprise systems. Many organizations are investing in these new tools to cultivate a modern user experience in an "Enterprise 2.0" environment that unlocks the full potential of traditional IT systems and fosters collaboration in key business processes. Here are some key points and takeaways from some of the keynotes yesterday at the Enterprise 2.0 Conference:

    - Social networks continue to forge complex connections between people, processes, and content, facilitating collaboration and the sharing of information
    - The customer of today lives inside of Facebook, on your web, or has an app for that - and they have a question - and want an answer NOW
    - Empowered employees are able to connect to colleagues, build relationships, develop expertise, self-select projects of interest to them, and expand skill sets well beyond their formal roles
    - A fundamental promise of Enterprise 2.0 is that ideas will be generated and shared by everyone across the organization, leading to increased innovation, agility, and competitive advantage

    How well is your organization delivering on these concepts? Are you able to successfully bring together people, processes and content? Are you providing the social tools your employees want and need? Are you experiencing the new social enterprise?

    Read the article

  • Dovecot and StartSSL problems with issuer

    - by knoim
    I am using Dovecot (v1) and trying to get my StartSSL certificate working. ssl_key_file points to my private key. I tried pointing ssl_cert_file to my public certificate, with and without using the Class 1 intermediate certificate from http://www.startssl.com/certs/sub.class1.server.ca.pem as ssl_ca_file, as well as combining them with

        cat publickey sub.class1.server.ca.pem > chained

    My mail client keeps telling me the certificate has no issuer, but running openssl x509 on my public certificate tells me the issuer is

        C=IL, O=StartCom Ltd., OU=Secure Digital Certificate Signing, CN=StartCom Class 1 Primary Intermediate Server CA

    My options for the CSR were:

        openssl req -new -newkey rsa:4096 -nodes

    Dovecot's log doesn't mention any problems. EDIT: This doesn't seem to be a problem specific to Dovecot; I am having the same problem with Postfix, and openssl verify gives me the same error.
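
    A hedged note on the usual cause with Dovecot 1.x: the server sends clients only what is in ssl_cert_file, and ssl_ca_file is used for verifying client certificates, not for completing the server's own chain. The common fix is to append the intermediate to the server certificate itself, server certificate first (the paths here are placeholders):

        cat your_server.crt sub.class1.server.ca.pem > /etc/ssl/certs/dovecot-chain.pem

        # dovecot.conf
        ssl_cert_file = /etc/ssl/certs/dovecot-chain.pem
        ssl_key_file = /etc/ssl/private/your_server.key

        # Check the chain locally (requires the StartCom root in the CA store):
        openssl verify -untrusted sub.class1.server.ca.pem your_server.crt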

    Read the article

  • What is the least irritating printer manufacturer?

    - by aireq
    Currently I have a Lexmark all-in-one printer/scanner, which has some of the worst drivers I've seen for a printer. The installation takes forever. Then, once it's installed, the printer will only work if I keep the "Lexmark Productivity Studio" running in my system tray. Later, after I've scanned something, 99% of the time the "Save to PDF" button doesn't do anything when I click it. It is also a wireless printer, but of course the only way to set any of the wireless settings is during the driver setup, so if my WEP key changes I have to go off and reinstall the entire printer driver. Lately I tried refilling one of the ink cartridges with a kit I bought off Amazon, and now both the printer and the drivers keep complaining about not having "Official Lexmark Ink". This comic from The Oatmeal pretty much sums up my feelings about consumer printers and their drivers. This question is, of course, pretty subjective, but I'd like to know which (if any) consumer printer brands actually provide quality drivers and software with their printers.

    Read the article
