Bricked my OnePlus 8P while flashing ColorOS 12 — can anyone explain what happened?

OnePlus 8P, HydrogenOS + TWRP + root; I wanted to flash ColorOS 12 just to try it out.
I found an internal beta build. The OTA package contains a payload.bin, which I extracted with payload dumper.
Then I made a fatal mistake: instead of flashing from fastboot, I flashed from TWRP's fastbootd. After powering off, the phone was a brick: the screen stays dark, there is no response to buttons or to charging, plugging it into a computer over USB shows no new device, and naturally Qualcomm's 9008 (EDL) mode doesn't work either.
I had no choice but to book an after-sales appointment (OnePlus after-sales is now handled entirely by OPPO). The technician opened the phone and unplugged the battery (to force a shutdown), but after reconnecting it there was still no response. He said the only fix is a motherboard replacement at 2k+ RMB, so I gave up...
Can any expert analyze what actually happened? I've been modding phones for years and have flashed plenty of ROMs on different devices, but this is the first time flashing has killed the hardware.

Tracing the Raft write-request flow (hashicorp/raft)

Table of contents
Overview:
Raft code repository:
The client's write request to the leader:
Leader-side handling in Raft:
ApplyLog:
The leader's main loop:
The leader's dispatch handling:
Leader-to-follower replication:
startStopReplication
replicate
replicateTo

Overview:
A trace of the Raft write-request flow, using github.com/hashicorp/raft as the example.

Raft code repository:

GitHub – hashicorp/raft: Golang implementation of the Raft consensus protocol
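Before diving into the example, it may help to see where Apply sits in the public API. The following is a minimal, self-contained sketch of a single-node cluster built against hashicorp/raft, assuming in-memory stores, an in-memory transport, a placeholder no-op FSM, and an illustrative node ID — these are my assumptions for the sketch, not part of the example project shown below:

package main

import (
  "fmt"
  "io"
  "time"

  "github.com/hashicorp/raft"
)

// noopFSM is a placeholder FSM for this sketch: it ignores log data and keeps no state.
type noopFSM struct{}

func (noopFSM) Apply(l *raft.Log) interface{}       { return nil }
func (noopFSM) Snapshot() (raft.FSMSnapshot, error) { return noopSnapshot{}, nil }
func (noopFSM) Restore(r io.ReadCloser) error       { return r.Close() }

type noopSnapshot struct{}

func (noopSnapshot) Persist(sink raft.SnapshotSink) error { return sink.Close() }
func (noopSnapshot) Release()                             {}

func main() {
  config := raft.DefaultConfig()
  config.LocalID = raft.ServerID("node-1") // illustrative ID

  // In-memory stores and transport keep the sketch self-contained;
  // a real deployment would use durable stores and a network transport.
  logStore := raft.NewInmemStore()
  stableStore := raft.NewInmemStore()
  snapStore := raft.NewInmemSnapshotStore()
  addr, transport := raft.NewInmemTransport("")

  r, err := raft.NewRaft(config, noopFSM{}, logStore, stableStore, snapStore, transport)
  if err != nil {
    panic(err)
  }

  // Bootstrap a single-node cluster so this node can elect itself leader.
  if err := r.BootstrapCluster(raft.Configuration{
    Servers: []raft.Server{{ID: config.LocalID, Address: addr}},
  }).Error(); err != nil {
    panic(err)
  }

  // Wait until we are leader, then submit a command. This is the same
  // entry point the example's AddWord handler uses: raft.Apply.
  for r.State() != raft.Leader {
    time.Sleep(10 * time.Millisecond)
  }
  f := r.Apply([]byte("hello"), time.Second)
  if err := f.Error(); err != nil {
    panic(err)
  }
  fmt.Println("committed at index", f.Index())
}

Everything the rest of this post traces happens behind that single r.Apply call.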

The client's write request to the leader:

package main

import (
  "context"
  "fmt"
  "io"
  "io/ioutil"
  "strings"
  "sync"
  "time"

  pb "github.com/Jille/raft-grpc-example/proto"
  "github.com/Jille/raft-grpc-leader-rpc/rafterrors"
  "github.com/hashicorp/raft"
)

// wordTracker keeps track of the three longest words it ever saw.
type wordTracker struct {
  mtx   sync.RWMutex
  words [3]string
}

var _ raft.FSM = &wordTracker{}

// compareWords returns true if a is longer (lexicography breaking ties).
func compareWords(a, b string) bool {
  if len(a) == len(b) {
    return a < b
  }
  return len(a) > len(b)
}

func cloneWords(words [3]string) []string {
  var ret [3]string
  copy(ret[:], words[:])
  return ret[:]
}

func (f *wordTracker) Apply(l *raft.Log) interface{} {
  f.mtx.Lock()
  defer f.mtx.Unlock()
  w := string(l.Data)
  for i := 0; i < len(f.words); i++ {
    if compareWords(w, f.words[i]) {
      copy(f.words[i+1:], f.words[i:])
      f.words[i] = w
      break
    }
  }
  return nil
}

func (f *wordTracker) Snapshot() (raft.FSMSnapshot, error) {
  // Make sure that any future calls to f.Apply() don't change the snapshot.
  return &snapshot{cloneWords(f.words)}, nil
}

func (f *wordTracker) Restore(r io.ReadCloser) error {
  b, err := ioutil.ReadAll(r)
  if err != nil {
    return err
  }
  words := strings.Split(string(b), "\n")
  copy(f.words[:], words)
  return nil
}

type snapshot struct {
  words []string
}

func (s *snapshot) Persist(sink raft.SnapshotSink) error {
  _, err := sink.Write([]byte(strings.Join(s.words, "\n")))
  if err != nil {
    sink.Cancel()
    return fmt.Errorf("sink.Write(): %v", err)
  }
  return sink.Close()
}

func (s *snapshot) Release() {}

type rpcInterface struct {
  wordTracker *wordTracker
  raft        *raft.Raft
}

func (r rpcInterface) AddWord(ctx context.Context, req *pb.AddWordRequest) (*pb.AddWordResponse, error) {
  f := r.raft.Apply([]byte(req.GetWord()), time.Second)
  if err := f.Error(); err != nil {
    return nil, rafterrors.MarkRetriable(err)
  }
  return &pb.AddWordResponse{
    CommitIndex: f.Index(),
  }, nil
}

func (r rpcInterface) GetWords(ctx context.Context, req *pb.GetWordsRequest) (*pb.GetWordsResponse, error) {
  r.wordTracker.mtx.RLock()
  defer r.wordTracker.mtx.RUnlock()
  return &pb.GetWordsResponse{
    BestWords:   cloneWords(r.wordTracker.words),
    ReadAtIndex: r.raft.AppliedIndex(),
  }, nil
}

The core of it is:

func (r rpcInterface) AddWord(ctx context.Context, req *pb.AddWordRequest) (*pb.AddWordResponse, error) {
  f := r.raft.Apply([]byte(req.GetWord()), time.Second)
  if err := f.Error(); err != nil {
    return nil, rafterrors.MarkRetriable(err)
  }
  return &pb.AddWordResponse{
    CommitIndex: f.Index(),
  }, nil
}

Leader-side handling in Raft:

ApplyLog:

Sending the signal to write the log:

// Apply is used to apply a command to the FSM in a highly consistent
// manner. This returns a future that can be used to wait on the application.
// An optional timeout can be provided to limit the amount of time we wait
// for the command to be started. This must be run on the leader or it
// will fail.
func (r *Raft) Apply(cmd []byte, timeout time.Duration) ApplyFuture {
  return r.ApplyLog(Log{Data: cmd}, timeout)
}

// ApplyLog performs Apply but takes in a Log directly. The only values
// currently taken from the submitted Log are Data and Extensions.
func (r *Raft) ApplyLog(log Log, timeout time.Duration) ApplyFuture {
  metrics.IncrCounter([]string{"raft", "apply"}, 1)

  var timer <-chan time.Time
  if timeout > 0 {
    timer = time.After(timeout)
  }

  // Create a log future, no index or term yet
  logFuture := &logFuture{
    log: Log{
      Type:       LogCommand,
      Data:       log.Data,
      Extensions: log.Extensions,
    },
  }
  logFuture.init()

  select {
  case <-timer:
    return errorFuture{ErrEnqueueTimeout}
  case <-r.shutdownCh:
    return errorFuture{ErrRaftShutdown}
  case r.applyCh <- logFuture:
    return logFuture
  }
}

The leader's main loop:

Receiving the signal from the channel:

// leaderLoop is the hot loop for a leader. It is invoked
// after all the various leader setup is done.
func (r *Raft) leaderLoop() {
  // stepDown is used to track if there is an inflight log that
  // would cause us to lose leadership (specifically a RemovePeer of
  // ourselves). If this is the case, we must not allow any logs to
  // be processed in parallel, otherwise we are basing commit on
  // only a single peer (ourself) and replicating to an undefined set
  // of peers.
  stepDown := false

  lease := time.After(r.conf.LeaderLeaseTimeout)

  for r.getState() == Leader {
    select {
    case rpc := <-r.rpcCh:
      r.processRPC(rpc)

    case <-r.leaderState.stepDown:
      r.setState(Follower)

    case future := <-r.leadershipTransferCh:
      if r.getLeadershipTransferInProgress() {
        r.logger.Debug(ErrLeadershipTransferInProgress.Error())
        future.respond(ErrLeadershipTransferInProgress)
        continue
      }

      r.logger.Debug("starting leadership transfer", "id", future.ID, "address", future.Address)

      // When we are leaving leaderLoop, we are no longer
      // leader, so we should stop transferring.
      leftLeaderLoop := make(chan struct{})
      defer func() { close(leftLeaderLoop) }()

      stopCh := make(chan struct{})
      doneCh := make(chan error, 1)

      // This is intentionally being setup outside of the
      // leadershipTransfer function. Because the TimeoutNow
      // call is blocking and there is no way to abort that
      // in case eg the timer expires.
      // The leadershipTransfer function is controlled with
      // the stopCh and doneCh.
      go func() {
        select {
        case <-time.After(r.conf.ElectionTimeout):
          close(stopCh)
          err := fmt.Errorf("leadership transfer timeout")
          r.logger.Debug(err.Error())
          future.respond(err)
          <-doneCh
        case <-leftLeaderLoop:
          close(stopCh)
          err := fmt.Errorf("lost leadership during transfer (expected)")
          r.logger.Debug(err.Error())
          future.respond(nil)
          <-doneCh
        case err := <-doneCh:
          if err != nil {
            r.logger.Debug(err.Error())
          }
          future.respond(err)
        }
      }()

      // leaderState.replState is accessed here before
      // starting leadership transfer asynchronously because
      // leaderState is only supposed to be accessed in the
      // leaderloop.
      id := future.ID
      address := future.Address
      if id == nil {
        s := r.pickServer()
        if s != nil {
          id = &s.ID
          address = &s.Address
        } else {
          doneCh <- fmt.Errorf("cannot find peer")
          continue
        }
      }
      state, ok := r.leaderState.replState[*id]
      if !ok {
        doneCh <- fmt.Errorf("cannot find replication state for %v", id)
        continue
      }

      go r.leadershipTransfer(*id, *address, state, stopCh, doneCh)

    case <-r.leaderState.commitCh:
      // Process the newly committed entries
      oldCommitIndex := r.getCommitIndex()
      commitIndex := r.leaderState.commitment.getCommitIndex()
      r.setCommitIndex(commitIndex)

      // New configuration has been committed, set it as the committed
      // value.
      if r.configurations.latestIndex > oldCommitIndex &&
        r.configurations.latestIndex <= commitIndex {
        r.setCommittedConfiguration(r.configurations.latest, r.configurations.latestIndex)
        if !hasVote(r.configurations.committed, r.localID) {
          stepDown = true
        }
      }

      start := time.Now()
      var groupReady []*list.Element
      var groupFutures = make(map[uint64]*logFuture)
      var lastIdxInGroup uint64

      // Pull all inflight logs that are committed off the queue.
      for e := r.leaderState.inflight.Front(); e != nil; e = e.Next() {
        commitLog := e.Value.(*logFuture)
        idx := commitLog.log.Index
        if idx > commitIndex {
          // Don't go past the committed index
          break
        }

        // Measure the commit time
        metrics.MeasureSince([]string{"raft", "commitTime"}, commitLog.dispatch)
        groupReady = append(groupReady, e)
        groupFutures[idx] = commitLog
        lastIdxInGroup = idx
      }

      // Process the group
      if len(groupReady) != 0 {
        r.processLogs(lastIdxInGroup, groupFutures)

        for _, e := range groupReady {
          r.leaderState.inflight.Remove(e)
        }
      }

      // Measure the time to enqueue batch of logs for FSM to apply
      metrics.MeasureSince([]string{"raft", "fsm", "enqueue"}, start)

      // Count the number of logs enqueued
      metrics.SetGauge([]string{"raft", "commitNumLogs"}, float32(len(groupReady)))

      if stepDown {
        if r.conf.ShutdownOnRemove {
          r.logger.Info("removed ourself, shutting down")
          r.Shutdown()
        } else {
          r.logger.Info("removed ourself, transitioning to follower")
          r.setState(Follower)
        }
      }

    case v := <-r.verifyCh:
      if v.quorumSize == 0 {
        // Just dispatched, start the verification
        r.verifyLeader(v)

      } else if v.votes < v.quorumSize {
        // Early return, means there must be a new leader
        r.logger.Warn("new leader elected, stepping down")
        r.setState(Follower)
        delete(r.leaderState.notify, v)
        for _, repl := range r.leaderState.replState {
          repl.cleanNotify(v)
        }
        v.respond(ErrNotLeader)

      } else {
        // Quorum of members agree, we are still leader
        delete(r.leaderState.notify, v)
        for _, repl := range r.leaderState.replState {
          repl.cleanNotify(v)
        }
        v.respond(nil)
      }

    case future := <-r.userRestoreCh:
      if r.getLeadershipTransferInProgress() {
        r.logger.Debug(ErrLeadershipTransferInProgress.Error())
        future.respond(ErrLeadershipTransferInProgress)
        continue
      }
      err := r.restoreUserSnapshot(future.meta, future.reader)
      future.respond(err)

    case future := <-r.configurationsCh:
      if r.getLeadershipTransferInProgress() {
        r.logger.Debug(ErrLeadershipTransferInProgress.Error())
        future.respond(ErrLeadershipTransferInProgress)
        continue
      }
      future.configurations = r.configurations.Clone()
      future.respond(nil)

    case future := <-r.configurationChangeChIfStable():
      if r.getLeadershipTransferInProgress() {
        r.logger.Debug(ErrLeadershipTransferInProgress.Error())
        future.respond(ErrLeadershipTransferInProgress)
        continue
      }
      r.appendConfigurationEntry(future)

    case b := <-r.bootstrapCh:
      b.respond(ErrCantBootstrap)

    case newLog := <-r.applyCh:
      if r.getLeadershipTransferInProgress() {
        r.logger.Debug(ErrLeadershipTransferInProgress.Error())
        newLog.respond(ErrLeadershipTransferInProgress)
        continue
      }
      // Group commit, gather all the ready commits
      ready := []*logFuture{newLog}
    GROUP_COMMIT_LOOP:
      for i := 0; i < r.conf.MaxAppendEntries; i++ {
        select {
        case newLog := <-r.applyCh:
          ready = append(ready, newLog)
        default:
          break GROUP_COMMIT_LOOP
        }
      }

      // Dispatch the logs
      if stepDown {
        // we're in the process of stepping down as leader, don't process anything new
        for i := range ready {
          ready[i].respond(ErrNotLeader)
        }
      } else {
        r.dispatchLogs(ready)
      }

    case <-lease:
      // Check if we've exceeded the lease, potentially stepping down
      maxDiff := r.checkLeaderLease()

      // Next check interval should adjust for the last node we've
      // contacted, without going negative
      checkInterval := r.conf.LeaderLeaseTimeout - maxDiff
      if checkInterval < minCheckInterval {
        checkInterval = minCheckInterval
      }

      // Renew the lease timer
      lease = time.After(checkInterval)

    case <-r.shutdownCh:
      return
    }
  }
}

The branch that handles new write requests arriving on applyCh is:

    case newLog := <-r.applyCh:
      if r.getLeadershipTransferInProgress() {
        r.logger.Debug(ErrLeadershipTransferInProgress.Error())
        newLog.respond(ErrLeadershipTransferInProgress)
        continue
      }
      // Group commit, gather all the ready commits
      ready := []*logFuture{newLog}
    GROUP_COMMIT_LOOP:
      for i := 0; i < r.conf.MaxAppendEntries; i++ {
        select {
        case newLog := <-r.applyCh:
          ready = append(ready, newLog)
        default:
          break GROUP_COMMIT_LOOP
        }
      }

      // Dispatch the logs
      if stepDown {
        // we're in the process of stepping down as leader, don't process anything new
        for i := range ready {
          ready[i].respond(ErrNotLeader)
        }
      } else {
        r.dispatchLogs(ready)
      }

The leader's dispatch handling:

// dispatchLog is called on the leader to push a log to disk, mark it
// as inflight and begin replication of it.
func (r *Raft) dispatchLogs(applyLogs []*logFuture) {
  now := time.Now()
  defer metrics.MeasureSince([]string{"raft", "leader", "dispatchLog"}, now)

  term := r.getCurrentTerm()
  lastIndex := r.getLastIndex()

  n := len(applyLogs)
  logs := make([]*Log, n)
  metrics.SetGauge([]string{"raft", "leader", "dispatchNumLogs"}, float32(n))

  for idx, applyLog := range applyLogs {
    applyLog.dispatch = now
    lastIndex++
    applyLog.log.Index = lastIndex
    applyLog.log.Term = term
    logs[idx] = &applyLog.log
    r.leaderState.inflight.PushBack(applyLog)
  }

  // Write the log entry locally
  if err := r.logs.StoreLogs(logs); err != nil {
    r.logger.Error("failed to commit logs", "error", err)
    for _, applyLog := range applyLogs {
      applyLog.respond(err)
    }
    r.setState(Follower)
    return
  }
  r.leaderState.commitment.match(r.localID, lastIndex)

  // Update the last log since it's on disk now
  r.setLastLog(lastIndex, term)

  // Notify the replicators of the new log
  for _, f := range r.leaderState.replState {
    asyncNotifyCh(f.triggerCh)
  }
}

dispatchLogs then wakes each follower's replication goroutine, which picks the work up in its triggerCh branch:

  // Notify the replicators of the new log
  for _, f := range r.leaderState.replState {
    asyncNotifyCh(f.triggerCh)
  }

    case <-s.triggerCh:
      lastLogIdx, _ := r.getLastLog()
      shouldStop = r.replicateTo(s, lastLogIdx)
    // This is _not_ our heartbeat mechanism but is to ensure
    // followers quickly learn the leader's commit index when
    // raft commits stop flowing naturally. The actual heartbeats
    // can't do this to keep them unblocked by disk IO on the
    // follower. See

Leader-to-follower replication:

startStopReplication
// startStopReplication will set up state and start asynchronous replication to
// new peers, and stop replication to removed peers. Before removing a peer,
// it'll instruct the replication routines to try to replicate to the current
// index. This must only be called from the main thread.
func (r *Raft) startStopReplication() {
  inConfig := make(map[ServerID]bool, len(r.configurations.latest.Servers))
  lastIdx := r.getLastIndex()

  // Start replication goroutines that need starting
  for _, server := range r.configurations.latest.Servers {
    if server.ID == r.localID {
      continue
    }
    inConfig[server.ID] = true
    if _, ok := r.leaderState.replState[server.ID]; !ok {
      r.logger.Info("added peer, starting replication", "peer", server.ID)
      s := &followerReplication{
        peer:                server,
        commitment:          r.leaderState.commitment,
        stopCh:              make(chan uint64, 1),
        triggerCh:           make(chan struct{}, 1),
        triggerDeferErrorCh: make(chan *deferError, 1),
        currentTerm:         r.getCurrentTerm(),
        nextIndex:           lastIdx + 1,
        lastContact:         time.Now(),
        notify:              make(map[*verifyFuture]struct{}),
        notifyCh:            make(chan struct{}, 1),
        stepDown:            r.leaderState.stepDown,
      }
      r.leaderState.replState[server.ID] = s
      r.goFunc(func() { r.replicate(s) })
      asyncNotifyCh(s.triggerCh)
      r.observe(PeerObservation{Peer: server, Removed: false})
    }
  }

  // Stop replication goroutines that need stopping
  for serverID, repl := range r.leaderState.replState {
    if inConfig[serverID] {
      continue
    }
    // Replicate up to lastIdx and stop
    r.logger.Info("removed peer, stopping replication", "peer", serverID, "last-index", lastIdx)
    repl.stopCh <- lastIdx
    close(repl.stopCh)
    delete(r.leaderState.replState, serverID)
    r.observe(PeerObservation{Peer: repl.peer, Removed: true})
  }
}

replicate

// replicate is a long running routine that replicates log entries to a single
// follower.
func (r *Raft) replicate(s *followerReplication) {
  // Start an async heartbeating routing
  stopHeartbeat := make(chan struct{})
  defer close(stopHeartbeat)
  r.goFunc(func() { r.heartbeat(s, stopHeartbeat) })

RPC:
  shouldStop := false
  for !shouldStop {
    select {
    case maxIndex := <-s.stopCh:
      // Make a best effort to replicate up to this index
      if maxIndex > 0 {
        r.replicateTo(s, maxIndex)
      }
      return
    case deferErr := <-s.triggerDeferErrorCh:
      lastLogIdx, _ := r.getLastLog()
      shouldStop = r.replicateTo(s, lastLogIdx)
      if !shouldStop {
        deferErr.respond(nil)
      } else {
        deferErr.respond(fmt.Errorf("replication failed"))
      }
    case <-s.triggerCh:
      lastLogIdx, _ := r.getLastLog()
      shouldStop = r.replicateTo(s, lastLogIdx)
    // This is _not_ our heartbeat mechanism but is to ensure
    // followers quickly learn the leader's commit index when
    // raft commits stop flowing naturally. The actual heartbeats
    // can't do this to keep them unblocked by disk IO on the
    // follower. See
    case <-randomTimeout(r.conf.CommitTimeout):
      lastLogIdx, _ := r.getLastLog()
      shouldStop = r.replicateTo(s, lastLogIdx)
    }

    // If things looks healthy, switch to pipeline mode
    if !shouldStop && s.allowPipeline {
      goto PIPELINE
    }
  }
  return

PIPELINE:
  // Disable until re-enabled
  s.allowPipeline = false

  // Replicates using a pipeline for high performance. This method
  // is not able to gracefully recover from errors, and so we fall back
  // to standard mode on failure.
  if err := r.pipelineReplicate(s); err != nil {
    if err != ErrPipelineReplicationNotSupported {
      r.logger.Error("failed to start pipeline replication to", "peer", s.peer, "error", err)
    }
  }
  goto RPC
}

replicateTo

// replicateTo is a helper to replicate(), used to replicate the logs up to a
// given last index.
// If the follower log is behind, we take care to bring them up to date.
func (r *Raft) replicateTo(s *followerReplication, lastIndex uint64) (shouldStop bool) {
  // Create the base request
  var req AppendEntriesRequest
  var resp AppendEntriesResponse
  var start time.Time
START:
  // Prevent an excessive retry rate on errors
  if s.failures > 0 {
    select {
    case <-time.After(backoff(failureWait, s.failures, maxFailureScale)):
    case <-r.shutdownCh:
    }
  }

  // Setup the request
  if err := r.setupAppendEntries(s, &req, atomic.LoadUint64(&s.nextIndex), lastIndex); err == ErrLogNotFound {
    goto SEND_SNAP
  } else if err != nil {
    return
  }

  // Make the RPC call
  start = time.Now()
  if err := r.trans.AppendEntries(s.peer.ID, s.peer.Address, &req, &resp); err != nil {
    r.logger.Error("failed to appendEntries to", "peer", s.peer, "error", err)
    s.failures++
    return
  }
  appendStats(string(s.peer.ID), start, float32(len(req.Entries)))

  // Check for a newer term, stop running
  if resp.Term > req.Term {
    r.handleStaleTerm(s)
    return true
  }

  // Update the last contact
  s.setLastContact()

  // Update s based on success
  if resp.Success {
    // Update our replication state
    updateLastAppended(s, &req)
    // Clear any failures, allow pipelining
    s.failures = 0
    s.allowPipeline = true
  } else {
    atomic.StoreUint64(&s.nextIndex, max(min(s.nextIndex-1, resp.LastLog+1), 1))
    if resp.NoRetryBackoff {
      s.failures = 0
    } else {
      s.failures++
    }
    r.logger.Warn("appendEntries rejected, sending older logs", "peer", s.peer, "next", atomic.LoadUint64(&s.nextIndex))
  }

CHECK_MORE:
  // Poll the stop channel here in case we are looping and have been asked
  // to stop, or have stepped down as leader. Even for the best effort case
  // where we are asked to replicate to a given index and then shutdown,
  // it's better to not loop in here to send lots of entries to a straggler
  // that's leaving the cluster anyways.
  select {
  case <-s.stopCh:
    return true
  default:
  }

  // Check if there are more logs to replicate
  if atomic.LoadUint64(&s.nextIndex) <= lastIndex {
    goto START
  }
  return

  // SEND_SNAP is used when we fail to get a log, usually because the follower
  // is too far behind, and we must ship a snapshot down instead
SEND_SNAP:
  if stop, err := r.sendLatestSnapshot(s); stop {
    return true
  } else if err != nil {
    r.logger.Error("failed to send snapshot to", "peer", s.peer, "error", err)
    return
  }

  // Check if there is more to replicate
  goto CHECK_MORE
}
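To tie dispatchLogs and leaderLoop together: commitment.match records the highest index each server (including the leader itself) has persisted, and the commit index that leaderLoop reads from commitment.getCommitIndex() is the largest index stored on a quorum. The helper below is a hypothetical, stand-alone illustration of that quorum rule, not code from hashicorp/raft (which tracks the value incrementally rather than recomputing it):

package main

import (
  "fmt"
  "sort"
)

// quorumCommitIndex returns the highest log index that a majority of the
// cluster has stored. matchIndexes holds one entry per voting server,
// including the leader's own last index. Illustrative helper only.
func quorumCommitIndex(matchIndexes []uint64) uint64 {
  if len(matchIndexes) == 0 {
    return 0
  }
  sorted := append([]uint64(nil), matchIndexes...)
  sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
  // With n servers a quorum is n/2+1; after sorting ascending, the entry at
  // position (n-1)/2 is the largest index held by at least a quorum.
  return sorted[(len(sorted)-1)/2]
}

func main() {
  // Leader has written index 7; followers have acknowledged 7, 5, 3, 2.
  // Indexes up to 5 are on three of five servers, so 5 is committed.
  fmt.Println(quorumCommitIndex([]uint64{7, 7, 5, 3, 2})) // prints 5
}

Once the quorum index advances, leaderLoop's commitCh branch pulls the now-committed inflight futures and hands them to the FSM, which is what unblocks the client's f.Error() back in AddWord.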

A 30-year-old backend developer, unsure which direction to take

First, some background on my situation: I'm a 30-year-old Java backend developer earning 30k/month at a small company in Chengdu, with very little overtime. The product has a DAU in the hundreds of thousands, and the stack is the Spring Cloud suite. For a small company with this kind of traffic, Spring Cloud is more than enough, and we haven't touched Kubernetes at all. The infrastructure services (Redis, MySQL, Kafka, etc.) are all bought off the shelf from Alibaba Cloud. My current feeling: nothing I do is particularly hard, and every problem I hit already has a mature solution, so the work is easy. But being this comfortable has left me adrift; I don't know what I need to do or should be doing.
1. Learn frontend. That seems to make it easier to pick up freelance projects (frontend gigs seem easy to find), and with some luck I could land a complete project and build the whole thing myself, front to back, and keep all the money. 90% of my motivation for learning frontend is to earn more from side projects. At the same time I worry that projects will be hard to come by, since I have no connections, resources, or reputation to vouch for me.
2. Go deeper into backend technology. Here there are so many choices that I don't know where to start or which direction to pick. Possible areas: JVM internals, frameworks and middleware (Spring, Kafka, ES, etc.), databases (Redis, MySQL, etc.), or distributed systems (really digging into the pain points, hard problems, and solutions, e.g. Kubernetes). Any one of these feels like something you could study for a lifetime, and I don't know which to choose. I don't have a strong personal preference either; I'm the kind of person who grows to like whatever I'm doing rather than choosing work based on what I like (which direction I took in the past was mostly decided by circumstances, and that's how I ended up here).
So I just drift along like this day after day, but I'm also anxious and worried about the future.

Ctrip 2022 campus recruitment internal referral (3+2 hybrid work)

Application link
Referred candidates get their résumés screened first; the application link is below~
Ctrip Group – Internal Referral
You can also apply directly with the referral code~ [ NTAH92A ]
Company benefits
3+2 hybrid work: 3 days in the office, 2 days working from home
Who we're recruiting
Class of 2022 full-time campus hires: fresh graduates (domestic or overseas) graduating between September 2021 and August 2022
Class of 2023 return-offer interns: current students (domestic or overseas) graduating between September 2022 and August 2023
Open roles
Development, algorithms, ops, product, design, business, operations, marketing, corporate functions...
Referral period
From now until May 15, 2022
Perks for replying to this post

Reply with [Applied + the position you applied for] and you'll receive the latest updates on your referral status~
Get the [aptitude test question bank] for free (all questions were collected and organized from the internet and can be used as reference and practice material).

macOS 12.1: can't create a new Quick Note from the hot corner?

On macOS 12.0 I set one of the hot corners to "Quick Note".
Back then, moving the cursor into that corner would bring up a small preview window; if I kept moving the cursor toward the corner, the window would grow and show a "+" button, and clicking it created a new Quick Note.
Now on 12.1 I only get the default note page and can't create a new one.
Is it something I'm doing wrong, or was this feature removed?